Data Annotation & AI Response Evaluation (Student Project/Practice)
I contributed to data annotation and labeling projects focused on evaluating AI-generated responses. The work required critical judgment to assess model outputs and ensure high-quality annotated datasets, and it strengthened my attention to detail and basic data-analysis skills in a data-labeling environment.

• Evaluated and rated AI-generated text responses for accuracy and quality
• Managed annotation data in Microsoft Excel and Google Sheets
• Leveraged AI tools to support the annotation process
• Applied research skills and fast reading comprehension for efficient review