AI Data Annotation and Model Evaluation Specialist
Evaluated AI-generated responses for accuracy, clarity, and completeness in remote data annotation projects. Applied structured evaluation rubrics to score outputs and provided written justifications for decisions. Identified logical fallacies and factual inaccuracies in AI and LLM outputs as part of model evaluation.
• Validated step-by-step math solutions for reasoning integrity
• Reviewed and corrected computer code outputs
• Ensured consistency and adherence to instructions in LLM responses
• Followed strict quality assurance guidelines