Multimodal AI Data Annotation & Model Evaluation Project
Contributed to large-scale AI training projects built on multimodal datasets combining text, images, and video. Annotated and evaluated outputs from vision and language models, focusing on object and subject identification, scene understanding, emotional context, and narrative coherence. Evaluated and ranked model responses to support reinforcement learning from human feedback (RLHF) and supervised fine-tuning (SFT). Applied strict annotation guidelines, quality-assurance checks, and consistency validation to ensure high data reliability. Collaborated with distributed AI research teams to refine datasets used for model training and performance optimization.