AI Training Data Contributor - RLHF Evaluator
Performed RLHF tasks by reviewing and evaluating AI-generated text responses against detailed rubrics. Evaluations focused on criteria such as accuracy, helpfulness, tone, and safety, providing signal used to fine-tune language models for leading generative AI systems.
• Rated model responses for quality and appropriateness
• Provided structured feedback to improve output reliability
• Maintained an acceptance rate above 95% on completed tasks
• Contributed to the advancement of large language models