AI Text Annotation & Evaluation Practice
Completed structured annotation tasks for supervised AI training datasets, including classification, rating, and response evaluation. Applied strict labeling guidelines consistently across edge cases and ambiguous prompts. Evaluated AI outputs for instruction adherence, factual accuracy, and relevance to user intent. Maintained high inter-annotator agreement through careful rubric interpretation and systematic decision-making across varied text domains.
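Inter-annotator consistency is commonly quantified with a chance-corrected agreement statistic such as Cohen's kappa. The sketch below is illustrative only, not part of the work described above; the annotator labels are hypothetical, and the function is a minimal from-scratch implementation for two annotators labeling the same items.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators: observed agreement corrected for
    the agreement expected by chance from each annotator's label frequencies."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Proportion of items on which the two annotators agree
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: product of each annotator's marginal label rates
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    if expected == 1.0:  # both annotators used a single identical label
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical ratings from two annotators on the same six prompts
a = ["good", "bad", "good", "good", "bad", "good"]
b = ["good", "bad", "bad", "good", "bad", "good"]
print(round(cohens_kappa(a, b), 3))  # → 0.667
```

Values near 1.0 indicate strong agreement beyond chance; values near 0 mean the annotators agree no more often than random labeling would predict, which is why raw percent agreement alone can be misleading on imbalanced label sets.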