AI Quality Evaluator (Remote) — Atlas Capture
This role involved evaluating AI-generated responses against specific annotation guidelines. I assessed the accuracy, relevance, and factual reliability of text-based AI outputs to improve machine learning datasets, applying detailed scoring frameworks to ensure the consistency of annotated data.
• Evaluated AI-generated responses for logical consistency and factual accuracy.
• Applied structured annotation and scoring guidelines.
• Identified and reported quality deviations in AI model outputs.
• Maintained high productivity and precision while meeting quality benchmarks.