AI Data Annotation & Quality Evaluation
I annotated and reviewed AI-generated content to improve the performance of language models and search engines. Tasks included classifying text for relevance, identifying named entities (NER), evaluating AI responses for safety, accuracy, and helpfulness, and providing feedback to refine training data. I worked across datasets of varying sizes, maintaining consistently high-quality output through attention to detail and adherence to project guidelines. My work directly contributed to improved model performance in e-commerce product categorization and general NLP tasks.
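The project's exact tooling isn't stated above; as one illustration of the quality-evaluation side of this work, agreement between annotators is commonly measured with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch in Python (the relevance labels here are hypothetical):

```python
from collections import Counter


def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators' label lists."""
    assert len(labels_a) == len(labels_b) and labels_a, "need paired labels"
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled the same.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each annotator's label distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[k] * counts_b[k] for k in counts_a.keys() | counts_b.keys()) / (n * n)
    if p_e == 1.0:  # both annotators used a single identical label throughout
        return 1.0
    return (p_o - p_e) / (1 - p_e)


# Hypothetical relevance annotations for four items from two reviewers.
reviewer_1 = ["relevant", "relevant", "irrelevant", "relevant"]
reviewer_2 = ["relevant", "irrelevant", "irrelevant", "relevant"]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # → 0.5
```

Values near 1.0 indicate strong agreement; low or negative values flag guideline ambiguities worth clarifying before annotation continues.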