SAMPLE DATA ANNOTATION PROJECT
This project involved evaluating and annotating AI-generated text responses to improve their quality and accuracy. I compared multiple responses to the same prompt and ranked them against criteria such as relevance, correctness, clarity, and completeness. I also performed error annotation, identifying specific issues in each response and rewriting improved versions. The data was organized in spreadsheets, where I labeled each entry and added notes so the annotations stayed consistent and the reasoning behind each judgment remained transparent. The main goal of the project was to simulate real-world AI training tasks and to develop skills in applying evaluation guidelines, maintaining consistency, and producing high-quality labeled data.
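
To make the workflow concrete, below is a minimal sketch in Python of how one annotation row could be represented, ranked, and exported to a spreadsheet. The field names, the 1-5 scoring scale, and the CSV layout are illustrative assumptions, not the actual guidelines or spreadsheet schema used in the project.

```python
# Illustrative sketch only: field names, the 1-5 scale, and the ranking rule
# are assumptions, not the project's actual annotation guidelines or schema.
import csv
from dataclasses import dataclass

CRITERIA = ("relevance", "correctness", "clarity", "completeness")

@dataclass
class AnnotationRecord:
    prompt_id: str
    response_id: str
    scores: dict            # criterion -> integer score (assumed 1-5 scale)
    error_notes: str = ""   # issues found during error annotation
    rewrite: str = ""       # improved version of the response, if written

    def total(self) -> int:
        # Overall score used to rank responses to the same prompt.
        return sum(self.scores.get(c, 0) for c in CRITERIA)

def rank_responses(records):
    # Order responses to a single prompt from best to worst by total score.
    return sorted(records, key=lambda r: r.total(), reverse=True)

def export_to_csv(records, path):
    # Flatten each record into one spreadsheet row, one column per criterion.
    fieldnames = ["prompt_id", "response_id", *CRITERIA,
                  "total", "error_notes", "rewrite"]
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        for r in records:
            row = {"prompt_id": r.prompt_id, "response_id": r.response_id,
                   "total": r.total(), "error_notes": r.error_notes,
                   "rewrite": r.rewrite}
            row.update({c: r.scores.get(c, 0) for c in CRITERIA})
            writer.writerow(row)

if __name__ == "__main__":
    # Two hypothetical responses to the same prompt, scored and ranked.
    records = [
        AnnotationRecord("p01", "r1",
                         {"relevance": 5, "correctness": 4,
                          "clarity": 4, "completeness": 3},
                         error_notes="Omits the final step of the procedure."),
        AnnotationRecord("p01", "r2",
                         {"relevance": 5, "correctness": 5,
                          "clarity": 4, "completeness": 5}),
    ]
    for rec in rank_responses(records):
        print(rec.response_id, rec.total())
    export_to_csv(records, "annotations.csv")
```

Keeping the per-criterion scores, the free-text error notes, and the rewrite together in one row mirrors how a spreadsheet-based annotation pass can stay auditable: a reviewer can see the ranking, the reasons behind it, and the corrected text side by side.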