Data Annotation & AI Evaluation (RLHF) Freelancer
I evaluated AI-generated responses using human feedback (RLHF) to improve text generation models. The role required a working knowledge of natural language processing (NLP) and careful attention to detail while applying explicit evaluation criteria. My work focused on rating, ranking, and providing quality feedback on machine-generated outputs.

• Analyzed and rated multiple AI-generated text responses each week.
• Ensured unbiased, thorough evaluation in line with project guidelines.
• Provided actionable feedback to improve model accuracy and relevance.
• Worked independently and adapted to evolving requirements.