AI Data Labelling & Model Evaluation – Outlier: Aether Project
I performed high-precision text annotation tasks as part of the Outlier: Aether Project. My responsibilities included annotation, classification, rewriting, safety evaluation, and multi-step reasoning assessment of large language model outputs. I contributed to model fine-tuning efforts through detailed quality analysis and iterative reporting.
• Executed text classification, rewriting, and safety evaluations.
• Assessed model outputs for coherence, factual accuracy, and guideline adherence.
• Generated error reports and identified annotation patterns.
• Assisted in updating annotation taxonomies and training datasets.