AI Text Annotation & Response Evaluation Project
Worked on AI training and data labeling projects focused on improving the performance of large language models. The project involved annotating and evaluating text-based datasets used for NLP model training. Key responsibilities included:

- Classifying text data by intent, topic, and relevance.
- Performing Named Entity Recognition (NER) for people, locations, organizations, and key terms.
- Evaluating AI-generated responses for accuracy, fluency, and contextual relevance.
- Comparing multiple model outputs and selecting the best response based on predefined quality metrics (see the illustrative sketch below).
- Writing and refining prompt–response pairs for supervised fine-tuning (SFT).
- Following strict annotation guidelines, quality benchmarks, and confidentiality standards.

Handled medium- to large-scale datasets while maintaining high accuracy and consistency under tight deadlines.
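
To make the workflow concrete, here is a minimal Python sketch of what a single annotation record could look like, assuming a JSON-style format. The field names (intent, entities, model_outputs, sft_pair), entity labels, and rating scale are hypothetical placeholders, not the project's actual schema or quality rubric.

```python
# Minimal sketch of an annotation record, assuming a JSON-style format.
# All field names, labels, and scores are illustrative placeholders.
import json


def span(text: str, surface: str) -> list[int]:
    """Return [start, end) character offsets of the first occurrence of surface."""
    start = text.find(surface)
    return [start, start + len(surface)]


source = "Book me a flight from Nairobi to London next Friday."

record = {
    # Intent / topic / relevance classification of the source text
    "text": source,
    "intent": "travel_booking",
    "topic": "flights",
    "relevance": "on_topic",
    # Named Entity Recognition: entity surface forms with character spans
    "entities": [
        {"text": "Nairobi", "type": "LOCATION", "span": span(source, "Nairobi")},
        {"text": "London", "type": "LOCATION", "span": span(source, "London")},
        {"text": "next Friday", "type": "DATE", "span": span(source, "next Friday")},
    ],
    # Side-by-side ratings of two model outputs on predefined quality metrics
    "model_outputs": [
        {"id": "model_a", "accuracy": 4, "fluency": 5, "relevance": 4},
        {"id": "model_b", "accuracy": 3, "fluency": 4, "relevance": 5},
    ],
    # Prompt-response pair written/refined for supervised fine-tuning (SFT)
    "sft_pair": {
        "prompt": "Summarise the user's travel request in one sentence.",
        "response": "The user wants to book a flight from Nairobi to London next Friday.",
    },
}


def pick_best(outputs: list[dict]) -> dict:
    """Select the output with the highest total score across the rated metrics."""
    return max(outputs, key=lambda o: o["accuracy"] + o["fluency"] + o["relevance"])


record["preferred_output"] = pick_best(record["model_outputs"])["id"]
print(json.dumps(record, indent=2))
```

Here the "best" response is chosen by simply summing the three metric scores; the actual selection followed the project's predefined quality metrics and tie-breaking guidelines.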