LLM Training & Text Annotation for NLP and Model Evaluation
Contributed to large-scale AI training projects focused on improving the performance and reliability of large language models (LLMs). Annotated and reviewed text data for tasks including intent classification, question answering, summarization, and prompt–response generation. Performed qualitative evaluation and rating of model outputs against accuracy, relevance, coherence, and safety criteria, including RLHF-style preference feedback.