AI Data Annotation & Labeling – LLM & Multimodal Tasks
Worked on AI data annotation and labeling tasks supporting Large Language Model (LLM) and multimodal AI training. The project involved evaluating and generating text-based responses, writing prompt–response pairs for supervised fine-tuning (SFT), rating model outputs, and performing quality checks against defined guidelines. Responsibilities included accuracy-focused evaluation, relevance scoring, consistency checks, and adherence to project quality standards to improve overall model performance.