Data Annotation
- Reviewed prompts and model responses on high-quality AI data labeling projects, annotating text according to detailed client guidelines and applying consistent rating criteria for relevance, accuracy, safety, and tone.
- Supported training and evaluation of large language models by tagging, classifying, and correcting examples, resolving edge cases with sound judgment, and flagging ambiguous instructions for clarification.
- Maintained high productivity against strict quality benchmarks, followed written SOPs and tool-specific workflows, and collaborated with reviewers to improve overall data quality.