AI Content Evaluator | DataAnnotation
As an AI Content Evaluator at DataAnnotation, I supported Reinforcement Learning from Human Feedback (RLHF) initiatives. My primary responsibility was to evaluate and improve the quality of AI-generated text with analytical rigor, providing structured human feedback to guide machine learning model development and enhance large language model performance.
• Evaluated diverse AI-generated responses, assessing their accuracy, relevance, and coherence.
• Delivered feedback in accordance with established guidelines and rating criteria for RLHF projects.
• Contributed to ongoing model refinement cycles by identifying strengths and areas for improvement in AI outputs.
• Collaborated with teams to optimize feedback workflows and uphold best practices.