AI Response Evaluation & Text Annotation Project
Worked on data labeling and AI training tasks involving text annotation, classification, and response evaluation.
- Labeled text datasets by sentiment, intent, and topic, following structured annotation guidelines to ensure consistent, high-quality outputs.
- Evaluated and ranked AI-generated responses for accuracy, clarity, relevance, and instruction-following; flagged errors, inconsistencies, and biases and provided detailed feedback to improve model performance.
- Applied prompt engineering techniques, designing and refining prompts through iterative testing to improve the quality and reliability of AI responses.
- Maintained labeled datasets in Excel and JSON formats, ensuring proper data organization and quality control.
- Delivered consistent, accurate results through strong attention to detail and close adherence to guidelines.
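The labeling and quality-control workflow described above can be sketched in code. This is a minimal, hypothetical illustration: the field names (`text`, `sentiment`, `intent`) and the allowed label sets are illustrative assumptions, not the project's actual annotation schema.

```python
import json

# Hypothetical label sets -- assumptions for illustration,
# not the project's real annotation guidelines.
ALLOWED_SENTIMENTS = {"positive", "negative", "neutral"}
ALLOWED_INTENTS = {"question", "request", "feedback", "complaint"}

def validate_record(record: dict) -> list:
    """Return a list of guideline violations for one labeled record."""
    errors = []
    if not record.get("text", "").strip():
        errors.append("missing text")
    if record.get("sentiment") not in ALLOWED_SENTIMENTS:
        errors.append("invalid sentiment: %r" % record.get("sentiment"))
    if record.get("intent") not in ALLOWED_INTENTS:
        errors.append("invalid intent: %r" % record.get("intent"))
    return errors

# One labeled record stored in the JSON format mentioned above.
record = {
    "text": "Can you reset my password?",
    "sentiment": "neutral",
    "intent": "question",
}
print(json.dumps(record))
print(validate_record(record))  # an empty list means the record passes
```

A check like this is how structured guidelines translate into automated quality control: every record is screened against the agreed label vocabulary before it enters the training set.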