Data Annotator
In this role, I annotated and evaluated AI-generated dialogue and narrative content for large language model training and refinement. My responsibilities included reviewing and rating model outputs against detailed evaluation criteria such as coherence, accuracy, tone, and contextual relevance. I provided comprehensive feedback and explanations to improve model performance and guide future training iterations.
• Consistently followed complex annotation guidelines while maintaining high quality standards.
• Worked with diverse content types and subject areas, contributing to model robustness.
• Assisted in developing new evaluation rubrics as linguistic challenges arose.
• Communicated detailed assessment rationales to engineering teams.