AI Data Annotator | TELUS International
This role focused on evaluating and annotating large language model (LLM) outputs to ensure safety, fairness, and transparency. Work included designing robust evaluation rubrics and performing assessments aligned with global regulatory standards, with bias mitigation and traceability as key aspects of the evaluation process.
• Conducted detailed analysis of complex LLM output data.
• Created and implemented systematic evaluation rubrics.
• Ensured model outputs met fairness, ethical, and safety benchmarks.
• Collaborated with cross-functional teams on AI governance alignment.