AI Response Evaluation & Data Annotation (Freelance / Practice)
Evaluated AI-generated responses for accuracy, clarity, and relevance, comparing multiple outputs and selecting the best with written justifications. Identified factual and logical errors in content and applied structured evaluation criteria similar to those used in RLHF and model-evaluation (Evals) tasks. Provided detailed feedback to improve AI-generated answers through practice and freelance annotation projects.

• Applied structured evaluation criteria to AI language model outputs
• Conducted content quality assessments and error detection
• Compared and rated multiple AI response outputs
• Delivered actionable feedback for AI model improvement
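The comparison-and-rating workflow above can be sketched as a weighted rubric in Python. This is a minimal illustrative sketch only; the criterion names, weights, and 1-5 scale are assumptions for demonstration, not the actual rubric used in any project.

```python
from dataclasses import dataclass

# Hypothetical rubric weights -- real annotation projects define their own criteria.
RUBRIC = {"accuracy": 0.5, "clarity": 0.3, "relevance": 0.2}

@dataclass
class Rating:
    """Per-criterion ratings for one AI response, on an assumed 1-5 scale."""
    accuracy: float
    clarity: float
    relevance: float

def weighted_score(r: Rating) -> float:
    """Combine per-criterion ratings into a single weighted score."""
    return (RUBRIC["accuracy"] * r.accuracy
            + RUBRIC["clarity"] * r.clarity
            + RUBRIC["relevance"] * r.relevance)

def pick_best(ratings: dict) -> str:
    """Return the response ID with the highest weighted score."""
    return max(ratings, key=lambda k: weighted_score(ratings[k]))

# Example: response B edges out A on accuracy, the highest-weighted criterion.
ratings = {
    "A": Rating(accuracy=3, clarity=5, relevance=4),
    "B": Rating(accuracy=5, clarity=4, relevance=4),
}
print(pick_best(ratings))  # B
```

In practice the chosen response would also be accompanied by a written justification, as described above.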