AI Training Data Annotation & Quality Review (Outlier AI)
Contributed to AI training and evaluation projects for large language models through Outlier AI. Annotated, reviewed, and evaluated text-based data against detailed project guidelines to improve model accuracy, safety, and response quality. Tasks included classifying, comparing, and assessing the quality of AI-generated outputs, with an emphasis on consistency and attention to detail. Maintained strict quality standards, incorporated reviewer feedback, and met productivity benchmarks in a fast-paced remote environment. Applied sound analytical judgment and adapted to evolving annotation instructions, delivering high-quality training data used in production AI systems.