AI Model Evaluator & Data Annotator
As an AI Model Evaluator and Data Annotator at Outlier AI and Handshake AI, I evaluated AI model outputs for accuracy, coherence, and completeness. I labeled and annotated over 300 code and natural-language samples, contributing to the training and improvement of AI systems. My responsibilities included structured performance evaluation, identification of edge cases, and detailed feedback to inform iterative model enhancement.
• Executed detailed annotation and star-rating labeling of datasets
• Applied evaluation rubrics for accuracy, safety, and instruction following
• Processed 400+ text and code samples under quality-assurance protocols
• Generated actionable feedback for model refinement and safety