Data Annotator
Contributed to a large-scale human-in-the-loop AI training project focused on improving multimodal model performance. Performed structured evaluation and annotation of image and video data, including fine-grained visual comparison, discrepancy detection, and detailed semantic description. Assessed AI-generated outputs for accuracy, consistency, and alignment with task specifications, producing high-quality labeled data for supervised learning and model validation workflows. Worked under evolving guidelines in a production AI data pipeline, maintaining strict quality thresholds and close attention to edge cases and visual detail.