AI Data Specialist (Meta)
Evaluated large language model (LLM) outputs for conversational quality, factual accuracy, and logical soundness. Used structured taxonomies and standardized rubrics to annotate reasoning, tone, and completeness of AI-generated responses. Delivered pairwise comparisons and fine-grained feedback to support reinforcement learning and model optimization.
• Fact-checked model responses using trusted sources.
• Annotated strengths, weaknesses, and inconsistencies in AI outputs.
• Maintained high inter-annotator agreement and reproducible scoring.
• Produced evaluation artifacts to improve deployment readiness.