AI Trainer and Reviewer
Worked on AI model evaluation and training projects at Outlier, focusing on prompt analysis, rubric-based assessment, and identification of planned failure modes to improve model reliability and response quality. Applied structured guidelines to rewrite prompts, define MECE (mutually exclusive, collectively exhaustive) evaluation criteria, and verify outputs for accuracy, clarity, and safety across diverse use cases. Contributed to iterative model improvement through detailed feedback, edge-case analysis, and self-verifiable evaluations aligned with real-world user needs.