AI Model Trainer – Outlier AI & DataAnnotation
As an AI Model Trainer at Outlier AI & DataAnnotation, I evaluated and scored AI-generated responses across a range of tasks. My role centered on designing complex programming challenges and applying structured rubrics to assess AI performance and reasoning. I improved model reasoning by providing detailed feedback and identifying logical and coding errors.
• Evaluated the correctness, logic, and code quality of AI outputs.
• Designed rigorous programming tasks in Python and JavaScript for LLM testing.
• Detected logic gaps, hallucinations, and solution errors in AI-generated code.
• Delivered structured Chain-of-Thought feedback to strengthen model reasoning.