AI/Agent Quality Evaluator, Data Labeling and Annotation Contractor
I contributed to AI model training by reviewing model outputs and labeling datasets through freelance microtask platforms and AI evaluation roles. My responsibilities included analyzing AI-generated responses for quality, accuracy, and relevance to improve model performance and enrich training datasets, as well as annotating text, images, and other digital content to project standards and guidelines.
• Evaluated AI/agent-generated text responses for safety, accuracy, and clarity.
• Performed content categorization, moderation, and identification of labeling errors.
• Followed detailed annotation instructions to ensure data quality and consistency.
• Provided structured feedback to enhance AI performance and dataset reliability.