AI Data Quality Analyst (LLM)
- Worked on an LLM data operations project focused on improving model accuracy, consistency, and safety through high-quality human feedback.
- Used Labelbox to manage end-to-end data workflows, including task configuration, annotation, review, and quality assurance.
- Annotated and evaluated LLM outputs across multiple task types: prompt–response relevance, instruction following, factual accuracy, reasoning quality, and tone alignment.
- Applied detailed labeling guidelines to classify errors such as hallucinations, logical inconsistencies, missing context, and unsafe or biased responses.
- Participated in multi-stage QA workflows in Labelbox, performing cross-review, disagreement resolution, and calibration checks to ensure annotation consistency.
- Flagged ambiguous prompts and edge cases, providing feedback that helped refine task definitions and labeling rubrics.
- Collaborated with QA reviewers and project leads to maintain high precision standards.