AI Response Evaluator (Outlier)
Reviewed and evaluated AI-generated responses for Outlier, assessing them for accuracy, clarity, and compliance with provided guidelines. The work centered on evaluating large language model outputs, requiring a critical eye for correctness and relevance; tasks included rating outputs, writing feedback, and judging output quality.
• Evaluated AI-generated text outputs for correctness and clarity.
• Provided structured feedback to improve language model performance.
• Ensured adherence to project guidelines and quality standards.
• Managed short-term, remote evaluation tasks under strict deadlines.