Crowd Rater/Evaluator
As a Crowd Rater/Evaluator at Appen, I assessed AI-generated text outputs for digital content quality, ensuring outputs matched user intent and remained relevant by adhering to strict project guidelines. I applied design thinking and user empathy throughout the evaluation process.
• Evaluated a variety of text-based AI model outputs across multiple projects
• Maintained high accuracy and consistency while following detailed instructions
• Managed multiple microtasks efficiently under tight deadlines
• Contributed to refining AI systems by providing human-centered feedback