AI Data Annotator / AI Evaluation Contributor
Reviewed and annotated AI-generated text outputs to improve the accuracy and quality of machine learning models. The work required close familiarity with task guidelines and the ability to evaluate outputs for clarity and rule compliance. Independently managed tasks across multiple short-term and ongoing AI evaluation projects, consistently delivering reliable, high-quality written assessments.
• Labeled and reviewed AI outputs for clarity, completeness, and guideline adherence.
• Identified inconsistencies, edge cases, and missing assumptions in responses.
• Produced structured reviews and annotations using platform-specific workflows.
• Maintained high standards for quality and reliability in remote AI annotation environments.