Data Annotator
Worked as a Generalist Data Annotator on the Aether Project at Outlier AI, contributing to the training and evaluation of large language models (LLMs). Reviewed AI-generated responses, performed text classification, ranked outputs by quality and accuracy, and flagged factual, logical, and linguistic issues. Carried out data labeling, response comparison, and prompt evaluation in accordance with detailed project guidelines. Maintained high accuracy and cross-annotation consistency, meeting quality assurance standards through periodic reviews and feedback cycles. Contributed to improved model alignment, response reliability, and overall performance across diverse subject areas.