AI Data Annotation & Evaluation Contributor
Contributor on AI training and evaluation projects through Aurora Studio at Lionbridge AI. Label and review text, image, and audio datasets used in machine learning model development, with tasks spanning classification, named entity recognition (NER), response evaluation, and rating AI-generated outputs for accuracy, relevance, and compliance with detailed project guidelines. Perform structured quality checks to maintain consistency and high annotation accuracy. Evaluate AI responses for logical correctness, instruction adherence, and overall output quality, contributing to the improvement of large language models. Adapt quickly to new task types, evolving guidelines, and performance benchmarks while sustaining productivity in a remote, task-based environment.