AI Text Annotation & Quality Evaluation for LLM Training
Annotated AI-generated text and evaluated model responses to improve large language model (LLM) performance. Reviewed outputs for accuracy, relevance, clarity, and adherence to guidelines; classified text outputs, rated responses against predefined rubrics, and flagged inconsistencies and hallucinations. Contributed to supervised fine-tuning (SFT) by writing high-quality prompt-response pairs and performed reinforcement learning from human feedback (RLHF) rating tasks. Maintained strict compliance with annotation guidelines and high accuracy standards while meeting daily productivity targets, applying close attention to detail and analytical reasoning to keep data consistent across batches.
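For illustration, a minimal sketch of how a rubric-rated prompt-response pair might be represented and validated; all field names, rubric dimensions, and the 1-5 scale here are hypothetical, not taken from any specific project's guidelines:

```python
from dataclasses import dataclass, field

# Hypothetical rubric: dimensions and score scale are illustrative only.
RUBRIC_DIMENSIONS = ("accuracy", "relevance", "clarity", "guideline_adherence")
SCALE = range(1, 6)  # 1 (poor) through 5 (excellent)

@dataclass
class AnnotatedPair:
    prompt: str
    response: str
    ratings: dict                               # dimension -> integer score
    flags: list = field(default_factory=list)   # e.g. ["hallucination"]

    def validate(self) -> bool:
        """Check that every rubric dimension is scored and in range."""
        missing = [d for d in RUBRIC_DIMENSIONS if d not in self.ratings]
        out_of_range = [d for d, s in self.ratings.items() if s not in SCALE]
        return not missing and not out_of_range

pair = AnnotatedPair(
    prompt="Summarize the water cycle in two sentences.",
    response="Water evaporates, condenses into clouds, and falls as precipitation.",
    ratings={"accuracy": 5, "relevance": 5, "clarity": 4, "guideline_adherence": 5},
)
print(pair.validate())  # True
```

A check like this mirrors the batch-consistency goal described above: every record must carry a complete, in-range rubric score before it enters the training set.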