AI Data Annotator (LLM Evaluation & Text Labeling)
Worked on AI training and data annotation projects focused on improving large language model (LLM) performance. Tasks included text classification, sentiment analysis, and prompt-response evaluation against predefined quality guidelines. Reviewed and labeled AI-generated outputs for accuracy, relevance, coherence, and safety compliance. Identified edge cases and inconsistencies, contributing to improved model behavior and response quality. Maintained high annotation accuracy on high-volume tasks, ensuring consistency across datasets. Adapted to evolving guidelines and participated in quality control processes to meet project standards.