AI Data Quality Analyst – Written Content Evaluation
Evaluated large volumes of written content for accuracy, reasoning, and consistency. Applied structured QA frameworks to assess and improve the quality of AI-generated and human-written responses, and delivered actionable feedback that strengthened data quality and AI output reliability.
• Conducted error detection and pattern recognition across textual data
• Applied rubrics and guidelines to ensure consistent, repeatable evaluations
• Collaborated with cross-functional teams to resolve ambiguous cases and edge scenarios
• Improved AI output quality by identifying recurring response issues