Freelance AI Data Annotator & Evaluation Specialist
In this role, I reviewed and annotated AI-generated outputs for training and evaluation purposes, consistently applying complex guidelines and rubrics across multiple platforms to ensure high-quality results. My work focused on improving model accuracy and identifying subtle errors in AI behavior.
• Performed structured evaluation of LLM outputs and delivered detailed feedback
• Applied multi-step rubric scoring to diverse text data
• Identified inconsistencies and edge cases in AI system responses
• Maintained accuracy across large task volumes and provided feedback on unclear instructions