AI Data Annotator & Model Evaluator
As an AI Data Annotator & Model Evaluator at Outlier AI, I evaluated and annotated outputs from large language models using rubric-based frameworks. I identified failure modes, edge cases, and inconsistencies to support model refinement and benchmarking, provided structured feedback, and participated in calibration sessions to stay aligned with evolving standards.
• Evaluated text and image outputs against defined rubrics for quality and consistency.
• Flagged ambiguous and inconsistent model responses with detailed supporting rationale.
• Adapted quickly to changing guidelines during calibration sessions.
• Communicated findings to stakeholders to support informed decision-making.