Data Annotator / AI Quality Evaluator
As a Data Annotator and AI Quality Evaluator, I analyzed and rated AI-generated text responses for accuracy and helpfulness. My work involved designing and assessing multi-turn prompts and performing side-by-side model output comparisons to ensure clarity and user relevance. I focused on grounding, personalization, and error identification within conversational AI systems.

• Evaluated AI outputs for contextual appropriateness, factual grounding, and response quality.
• Conducted SxS (side-by-side) model comparisons to rank outputs on integration and clarity.
• Designed prompts to test reasoning, memory, and user personalization.
• Maintained rigorous documentation and high data-integrity standards throughout the annotation process.