AI Data Annotator / Evaluator
As an AI Data Annotator/Evaluator at Mindrift and Outlier, I evaluated and provided feedback on AI/LLM-generated outputs spanning code, text, and structured data. My work covered reinforcement learning from human feedback (RLHF), annotation of programming code, content review, and image quality evaluation. I maintained consistency by applying detailed rubrics and flagged policy-violating outputs to protect the integrity of model responses.
• Evaluated Python and SQL code snippets for accuracy, style, and correctness.
• Ranked, rated, and rewrote AI-generated responses as part of RLHF tasks.
• Reviewed AI outputs across technical domains, including data pipelines and cloud concepts.
• Performed image annotation and assessed image quality for photorealism and accuracy.