Outlier – AI Prompt Evaluation and Text Annotation
Contributed on the Outlier platform, evaluating AI-generated text responses and annotating prompts used to fine-tune large language models. Tasks included classifying responses by relevance, coherence, and factual accuracy; writing and rewriting prompts and completions; and rating model outputs against detailed guidelines. Maintained high attention to detail and adhered to strict quality instructions to ensure consistency across tasks. Projects involved both English and French content and required a strong grasp of linguistic nuance and logical structure.