Remote Quality Analyst and AI Evaluator, Crowdgen by Appen and OneForma
As a Remote Quality Analyst and AI Evaluator for Crowdgen by Appen and OneForma, I assessed AI-generated conversations for accuracy, tone, and adherence to guidelines. I applied evaluation rubrics and frameworks to provide structured feedback for model improvement, scoring conversations, writing justifications, and maintaining a high standard of accuracy and compliance.
• Evaluated and scored over 200 AI-generated conversations weekly.
• Provided written justifications for all scoring decisions.
• Adapted rapidly to new guidelines and AI domains.
• Maintained a record of zero flagged submissions, demonstrating consistent quality and reliability.