AI Text Evaluation & Data Labeling Project
Worked on AI training and data labeling projects focused on evaluating, rating, and classifying text data used to train large language models. Tasks included reviewing AI-generated responses; assessing their accuracy, relevance, tone, and guideline compliance; and providing structured feedback to improve model performance. The work required strict adherence to detailed instructions, consistency across large task volumes, and sustained high quality standards. Quality assurance included self-review, consistency checks, and compliance with project-specific scoring rubrics.