AI Data Evaluator
I worked on a large-scale data labelling and annotation project supporting machine-learning model training and evaluation. My role involved annotating text, image, and short video/GIF data through tasks such as classification, tagging, entity recognition, and comparison-based evaluation, following detailed project guidelines. The project operated at scale, covering tens of thousands of data units, and I sustained a personal throughput of hundreds of annotations per day across multiple task batches under tight deadlines. Quality was ensured through strict guideline adherence, inter-annotator agreement checks, gold-standard tasks, and regular audits. I consistently met the required accuracy threshold (95% quality score) and incorporated reviewer feedback to maintain high consistency and reliability across datasets.
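To make the quality-assurance checks concrete, the sketch below illustrates two of the mechanisms mentioned above: inter-annotator agreement (here via Cohen's kappa, one common agreement statistic) and accuracy against gold-standard labels. This is a minimal, hypothetical Python example; the function names, example labels, and data are illustrative assumptions, not artifacts of the actual project.

```python
# Illustrative sketch (hypothetical data): inter-annotator agreement via
# Cohen's kappa, plus accuracy against gold-standard labels.
from collections import Counter


def cohens_kappa(labels_a, labels_b):
    """Agreement between two annotators, corrected for chance agreement."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: product of each annotator's marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)


def gold_accuracy(labels, gold):
    """Share of annotations matching the gold-standard answers."""
    return sum(p == g for p, g in zip(labels, gold)) / len(gold)


# Hypothetical labels from two annotators and a gold-standard answer key.
annotator_a = ["cat", "dog", "dog", "cat", "bird", "cat"]
annotator_b = ["cat", "dog", "cat", "cat", "bird", "cat"]
gold = ["cat", "dog", "dog", "cat", "bird", "dog"]

print(f"Cohen's kappa: {cohens_kappa(annotator_a, annotator_b):.2f}")
print(f"Gold accuracy: {gold_accuracy(annotator_a, gold):.2%}")  # vs. 95% threshold
```

In practice, projects of this kind typically embed gold-standard tasks among regular ones and flag annotators whose agreement or gold accuracy falls below the required threshold for targeted feedback.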