Text Annotation & AI Output Evaluation
Annotated text and evaluated AI-generated outputs to support the training and assessment of language models. Classified text by intent and relevance, rated AI responses for accuracy and completeness, validated structured text inputs, and ensured adherence to detailed annotation guidelines. Handled high-volume datasets requiring consistency, attention to edge cases, and strict quality control; maintained high accuracy across repetitive tasks and corrected labeling inconsistencies to improve dataset reliability.