AI Text Annotation & Model Evaluation Project
Worked on AI training projects focused on improving large language models through high-quality text annotation and response evaluation. Tasks included labeling user prompts by intent and sentiment; evaluating AI-generated responses for correctness, relevance, and safety; and identifying hallucinations and logical inconsistencies. Processed large datasets while strictly adhering to detailed annotation guidelines and quality benchmarks, and performed regular quality checks to ensure consistency and accuracy across all labeled data.