AI Trainer & Evaluator Architect – MUDIAN Ltd
In this role, I evaluated LLM and RAG outputs for accuracy, helpfulness, safety, and coherence within enterprise AI systems. I annotated and reviewed AI-generated responses to support ongoing model improvement, and produced evaluation rubrics, annotation guidelines, and quality assurance documentation for AI workflows.
• Evaluated outputs against structured frameworks and documented each assessment.
• Annotated complex responses to strengthen model feedback cycles.
• Led the creation of internal evaluation guidelines aligned with industry best practices.
• Drove continuous improvement of the output review process.