Data Annotation – AI Model Quality Reviewer
I contributed to a range of projects focused on providing structured feedback to improve the quality of AI-generated responses. My work involved reviewing and comparing AI outputs along multiple axes, including instruction following, helpfulness, truthfulness, relevance, safety, and tone. Projects included fact-checking claims in responses, assessing creative writing, and guiding LLMs on complex electronics topics.
• Conducted comprehensive evaluations of AI model outputs for relevance and factual accuracy.
• Applied rubrics and taxonomies to assess and compare AI responses across various domains.
• Used STEM knowledge and critical reasoning to verify claims and technical details in AI-generated content.
• Gained insight into AI model limitations, including hallucinations, safety, and appropriateness.