AI Training Contributor – Data Annotation & Model Evaluation
As an AI training contributor, I evaluated and annotated AI-generated outputs to improve model quality and relevance. My work involved ranking responses, identifying low-quality or harmful content, and applying detailed evaluation guidelines. I maintained high accuracy through careful attention to detail across a range of text and multimodal tasks.

• Evaluated the accuracy and relevance of AI outputs
• Ranked text responses and flagged errors or risks
• Applied structured guidelines for consistent labeling
• Reviewed and maintained quality in large datasets