AI Response Evaluation & Prompt-Based Content Analysis
Evaluated AI-generated responses for accuracy, relevance, clarity, and overall quality against structured guidelines. Performed prompt-based analysis to assess how effectively AI systems interpret and respond to varied user inputs. Rated responses, flagged inconsistencies, refined prompt wording, and verified that outputs aligned with user intent. Maintained high data quality through logical reasoning, attention to detail, and consistent application of evaluation criteria across tasks. Contributed to improved model performance by delivering structured feedback and identifying weaknesses in response generation and contextual understanding.