Data Annotator | Outlier
As a Data Annotator for Outlier, I worked on the Aether project, annotating and reviewing AI-generated outputs. My responsibilities included evaluating AI model responses for reasoning quality, detecting hallucinations, and ensuring responses adhered to project guidelines. I maintained strict annotation standards to deliver consistent, high-precision evaluations that improved the reliability and factuality of the models.
• Assessed AI outputs for logical reasoning, factual accuracy, and alignment with annotation guidelines.
• Identified and documented errors, edge cases, and hallucinations to provide structured feedback.
• Delivered ongoing evaluations under tight quality and consistency standards.
• Supported model improvements by contributing actionable analysis and assessment insights.