AI Data Annotator & Code Evaluator
As an AI Data Annotator and Code Evaluator at Outlier AI, I assessed and rated large language model outputs for coding, reasoning, and language tasks. My work included annotating and improving AI-generated content to enhance accuracy and logical flow. Technical judgment and structured feedback were essential to meeting rigorous quality standards.

• Evaluated and rated Python, JavaScript, and reasoning responses generated by LLMs.
• Annotated AI-generated outputs across coding, mathematics, and language tasks.
• Delivered structured feedback under task-specific rubrics to improve model output.
• Consistently met deadlines and maintained high performance under tracked conditions.