Text-Based Data Labeling & AI Response Evaluation
Worked on multiple text-based data labeling and AI training projects focused on improving the performance and reliability of large language models. Tasks included classifying and labeling text data, evaluating AI-generated responses for accuracy, relevance, tone, and safety, and performing side-by-side comparisons of model outputs against detailed guidelines. Additional responsibilities included writing prompts and responses for supervised fine-tuning (SFT), summarization and rewriting tasks, search relevance evaluation, and providing structured feedback for reinforcement learning from human feedback (RLHF). Quality standards emphasized consistency, adherence to annotation rubrics, plagiarism avoidance, hallucination detection, and careful review of sensitive or misleading content. All work was completed in accordance with platform-specific quality controls and validation requirements.