AI Text Data Annotation & Evaluation Project
Performed text-based data labeling and evaluation tasks to support the training and improvement of large language models. Responsibilities included classifying and categorizing text data, reviewing AI-generated responses for correctness, relevance, tone, and policy compliance, and providing structured feedback to improve model alignment. Completed all tasks in accordance with detailed annotation guidelines, with emphasis on accuracy and consistency, and performed regular quality checks to ensure annotations met required standards and project benchmarks.