Scoring Rubric Designer for AI Training Tasks
I designed scoring rubrics for AI training tasks, with a focus on ensuring consistency and accuracy across annotated datasets. My primary responsibility was to evaluate and rate AI-generated outputs against these rubrics, providing structured feedback to improve model performance. This work enhanced the reliability of AI-generated results across multiple datasets.

• Developed highly detailed, standardized rubrics for annotator guidance.
• Conducted rigorous evaluation of AI outputs to identify errors and inconsistencies.
• Provided structured feedback to data annotators and model trainers.
• Collaborated with cross-functional teams to refine evaluation criteria.