LLM Data Labeling and Evaluation Project
Contributed to a large-scale data labeling project supporting the training and fine-tuning of large language models (LLMs). Evaluated AI-generated text for accuracy, coherence, and relevance; categorized and annotated text data according to detailed linguistic and contextual guidelines; and upheld high quality standards through consistent review and feedback. Collaborated with a distributed team to meet accuracy and throughput targets, achieving a quality score above 95%.