AI Content Evaluator & Annotator
As an AI Content Evaluator and Annotator, I assessed AI-generated responses for human-likeness, emotional tone, and realism. My work involved structured annotation, error flagging, and refinement of conversational models, with a focus on improving the authenticity and engagement of virtual dialogue systems.
• Evaluated responses against structured rubrics to identify robotic patterns.
• Annotated data for empathy, tone, and conversational consistency.
• Flagged and categorized dialogue inconsistencies with attention to nuance.
• Enhanced the believability of AI characters for virtual persona projects.