AI Rater & Content Quality Analyst
- Evaluated and labeled 1,000+ data tasks weekly across text classification, reasoning validation, and conversational AI training projects
- Reviewed and rated AI-generated responses for factual accuracy, coherence, safety compliance, and instruction alignment
- Identified hallucinations, logical inconsistencies, and bias risks in model outputs
- Applied detailed annotation guidelines across multi-step reasoning and edge-case scenarios
- Maintained a 98% average quality rating across quarterly review cycles
- Provided structured feedback used to improve model training pipelines