AI Evaluation and Data Quality Contributor
I evaluated outputs from AI models for accuracy, logical reasoning, and alignment with project criteria on confidential projects. My work included structured data annotation, validation, and quality review as part of remote annotation teams on the Mercor and TELUS platforms. I applied mathematical reasoning and quality assurance practices to improve data reliability and output integrity.
• Labeled and annotated diverse text data used in model training and evaluation
• Systematically reviewed and rated AI-generated responses for accuracy and alignment
• Detected and documented errors, inconsistencies, and model weaknesses
• Contributed to data quality workflows for ongoing model improvement