AI Evaluator – Sama
As an AI Evaluator at Sama, I reviewed and rated AI-generated content for quality and reliability. My work required careful annotation of datasets with a strong focus on identifying errors, bias, and inconsistencies, and I consistently ensured all outputs met defined project standards.

• Annotated diverse textual datasets to train and assess AI systems.
• Conducted quality checks and rated outputs based on accuracy and relevance.
• Reported data issues to improve AI performance and reduce systemic bias.
• Collaborated with quality assurance teams to maintain annotation guidelines.