Independent AI Interaction & Evaluation Practice
I engaged with AI language models to evaluate response quality, coherence, and factual accuracy in real-time conversations. My responsibilities included assessing AI outputs for logical consistency, factuality, and adherence to prompt instructions. I applied structured feedback to support model improvement, in line with best practices for AI annotation.

• Evaluated AI-generated text for accuracy and relevance
• Identified hallucinations and ambiguities in language model outputs
• Practiced nuanced prompt and response evaluation aligned with guidelines
• Provided written feedback to improve model performance