AI Text Annotation & Response Evaluation Project
The project focused on evaluating and annotating AI-generated text responses to improve model quality and alignment. Tasks included rating responses for relevance, clarity, factual accuracy, tone, and logical consistency according to structured guidelines. I categorized outputs, flagged misleading or biased content, and wrote feedback explaining each rating decision. The work involved reviewing hundreds of prompt-response pairs while maintaining high consistency across annotations. Quality control measures included self-review, cross-checking of edge cases, and strict adherence to the defined scoring rubrics to ensure reliable training data.
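The workflow above (per-dimension rubric scores, content flags, and written feedback for each prompt-response pair) can be sketched as a simple data structure. This is a minimal illustration, not the project's actual tooling; the field names, the 1-5 scale, and the `validate` helper are all assumptions for the sake of the example.

```python
from dataclasses import dataclass, field

# Rubric dimensions named in the project description
RUBRIC_DIMENSIONS = ("relevance", "clarity", "factual_accuracy", "tone", "logical_consistency")
SCALE = range(1, 6)  # hypothetical 1-5 rating scale

@dataclass
class Annotation:
    """One annotated prompt-response pair."""
    prompt_id: str
    scores: dict            # dimension -> integer score
    flags: list = field(default_factory=list)  # e.g. "misleading", "biased"
    feedback: str = ""      # written rationale for the ratings

    def validate(self) -> bool:
        """Check every rubric dimension is scored and each score is in range."""
        missing = [d for d in RUBRIC_DIMENSIONS if d not in self.scores]
        out_of_range = [d for d, s in self.scores.items() if s not in SCALE]
        return not missing and not out_of_range

ann = Annotation(
    prompt_id="pair-001",
    scores={d: 4 for d in RUBRIC_DIMENSIONS},
    flags=["misleading"],
    feedback="Response cites an outdated statistic.",
)
print(ann.validate())  # True
```

A check like `validate` mirrors the quality-control step described: annotations missing a rubric dimension or using an out-of-range score are caught before the data is treated as reliable.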