AI Data Labeler & Model Evaluator – Outlier AI
Annotated and evaluated AI-generated outputs for complex technical prompts, assessing consistency and quality. Ensured high signal quality using structured rubrics and detailed benchmarks, and provided iterative feedback to support model refinement.
• Focused on prompt-based evaluation of AI-generated text
• Performed detailed labeling for technical task assessment
• Collaborated in ongoing model improvement cycles
• Upheld strict data-quality standards throughout