AI Model Evaluation & Data Annotation – Outlier, TELUS International
• Evaluated AI model outputs for relevance and quality using established, task-specific rubrics.
• Conducted data annotation through prompt–response evaluation and relevance-judgment tasks.
• Designed and tested prompts and rubrics to compare and assess AI model behaviors.
• Provided structured feedback on model strengths and weaknesses.
• Identified points of model failure through systematic output testing.
• Supported model improvement with targeted evaluation insights.
• Collaborated with AI teams on model assessment and annotation processes.