Data annotation
Worked on the Multimango project at Outlier, evaluating and improving multimodal AI systems that generate image and text responses. Reviewed AI-generated outputs, compared multiple candidate responses, and assessed performance across instruction following, visual accuracy, realism, and language quality. Performed detailed data labeling tasks, including flagging object-count discrepancies, detecting visual inconsistencies, checking contextual alignment between prompts and outputs, and rating responses against structured quality rubrics. Wrote concise justification summaries to support evaluation decisions and keep feedback clear, consistent, and human-like. Contributed to large-scale AI training workflows by maintaining strict quality standards, adhering to annotation guidelines, and meeting accuracy benchmarks, delivering high-quality annotations for model refinement with careful, analytical, and consistent work.
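As an illustration of what rubric-based rating can look like in practice, the sketch below models a structured quality rubric in Python. It is a minimal, hypothetical example: the criterion names, the 1-5 scale, and the comparison logic are assumptions for illustration, not the actual Multimango rubric or tooling.

```python
from dataclasses import dataclass, field

# Hypothetical rubric criteria; the real project rubric is not public.
CRITERIA = ("instruction_following", "visual_accuracy", "realism", "language_quality")

@dataclass
class RubricRating:
    """One annotator's rating of a single AI response, scored per criterion."""
    response_id: str
    scores: dict = field(default_factory=dict)   # criterion -> int in 1..5
    justification: str = ""                      # concise summary supporting the rating

    def add_score(self, criterion: str, score: int) -> None:
        if criterion not in CRITERIA:
            raise ValueError(f"unknown criterion: {criterion}")
        if not 1 <= score <= 5:
            raise ValueError("scores are on a 1-5 scale")
        self.scores[criterion] = score

    def overall(self) -> float:
        """Unweighted mean across criteria scored so far."""
        if not self.scores:
            raise ValueError("no scores recorded")
        return sum(self.scores.values()) / len(self.scores)

# Example: comparing two candidate responses to the same prompt.
a = RubricRating("response_a", justification="Correct object count; minor lighting artifact.")
for criterion, score in zip(CRITERIA, (5, 4, 3, 5)):
    a.add_score(criterion, score)

b = RubricRating("response_b", justification="Misses one object requested in the prompt.")
for criterion, score in zip(CRITERIA, (3, 3, 4, 5)):
    b.add_score(criterion, score)

preferred = max((a, b), key=RubricRating.overall)
print(f"preferred: {preferred.response_id} ({preferred.overall():.2f})")
```

The justification field mirrors the role's requirement that every rating ship with a concise, human-readable rationale alongside the numeric scores.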