AI Training Contributor | Response Evaluation | Multimodal Annotation | Prompt Engineering
I contributed to multiple project-based AI training workflows focused on improving model performance across text and multimodal tasks. The projects involved evaluating, comparing, and refining AI-generated outputs for accuracy, coherence, and alignment with task-specific guidelines, and were delivered intermittently over approximately one year across varied task types and evolving requirements.

My responsibilities included AI response evaluation, human-to-human (H2H) comparisons, and multimodal annotation tasks such as image-to-text and video-to-text assessments. I also performed dense structured grounding to verify factual alignment with source material, wrote prompts to improve model outputs, and completed quality assurance tasks to identify errors and inconsistencies.

The work was high-volume and iterative, with repeated task cycles and continuous exposure to new guidelines and evaluation frameworks, which required maintaining consistency across tasks while adapting to changing instructions and quality expectations. To ensure quality, I adhered closely to detailed annotation guidelines and evaluation rubrics, applied consistency checks across outputs, and focused on identifying subtle errors such as hallucinations, logical inconsistencies, and misalignment with source material, contributing to reliable and accurate data labeling across project cycles.