Previous Experience
I worked on a large-scale data labeling project focused on LLM-related text annotation. The scope included annotating and classifying thousands of text samples across diverse topics to support model training and evaluation. Tasks covered entity recognition, classification, translation/localization, and prompt–response writing (SFT) for high-quality LLM outputs. The dataset exceeded 50,000 text items of varied complexity (short prompts, long-form answers, multilingual content). I ensured high accuracy and consistency through cross-checking, adherence to strict labeling guidelines, and QA feedback loops. Quality measures included double-review processes, inter-annotator agreement checks, and continuous calibration to maintain accuracy above 95%.
Tools used: Data Annotation Tech platform with built-in QA dashboards and analytics to track progress and quality.
Industry focus: Large Language Models / AI / NLP.
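Inter-annotator agreement checks like those mentioned above are commonly quantified with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch; the annotator labels here are invented for illustration and do not come from the project described:

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators labeling the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's label distribution.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical entity labels from two annotators on six items.
ann1 = ["PER", "ORG", "ORG", "LOC", "PER", "ORG"]
ann2 = ["PER", "ORG", "LOC", "LOC", "PER", "ORG"]
print(round(cohen_kappa(ann1, ann2), 3))  # → 0.75
```

Here raw agreement is 5/6 (~0.83), but kappa is lower (0.75) because some agreement is expected by chance; calibration rounds typically target a kappa threshold rather than raw accuracy alone.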