AI Model Evaluator / Data Annotator
Assessed AI-generated textual outputs for quality and accuracy, following structured guidelines to ensure consistent annotation and feedback for LLM and AI agent system development. Delivered evidence-based assessments and supported the improvement of natural language understanding models.
• Performed regular quality assurance on LLM outputs.
• Reviewed prompts and provided critical evaluations of model behavior.
• Flagged anomalies, inconsistencies, and error patterns in generated data.
• Maintained documentation and followed annotation workflows precisely.