Multimodal Data Evaluator
As a Multimodal Data Evaluator, I led annotation projects labeling text, images, and computer-vision trajectories to produce high-quality training data for AI agents. I recorded and annotated workflows in Linux environments to support human-in-the-loop tasks for training AI to navigate software interfaces, and collaborated directly with project managers to refine labeling guidelines against client-specific quality benchmarks.
• Executed multimodal annotation tasks spanning text, image, and computer-vision labeling.
• Used Linux desktop environments to support human-in-the-loop workflow recording and annotation.
• Refined data quality metrics and labeling guidelines in collaboration with project managers.
• Raised data annotation standards for specialized client AI systems.