AI Trainer / Data Annotation & RLHF Evaluator (Freelance / Platform Work)
Contribute to AI training projects focused on RLHF, prompt evaluation, annotation guidelines, and LLM output assessment. Apply annotation-guideline calibration and error-detection techniques in current platform work on Outlier, Appen, Scale AI, and Alignerr. Evaluate, annotate, and rate AI-generated content with an emphasis on clear, structured, accurate feedback.
• Assess prompts and review model outputs across multiple annotation platforms.
• Apply RLHF evaluation criteria to ensure safety and quality in LLM outputs.
• Maintain working knowledge of annotation standards and calibration processes.
• Manage tasks in Outlier and similar proprietary task interfaces.