Data Labeling & Human Feedback Specialist
- Labeled and curated high-quality training data for large-scale machine learning models, including NLP and reasoning-focused tasks used in production AI systems.
- Applied detailed annotation guidelines to produce consistent, low-noise labels across complex, ambiguous inputs.
- Participated in iterative guideline refinement to reduce variance, bias, and ambiguity in labeled datasets.
- Conducted quality audits and disagreement analysis to identify systematic labeling errors and edge cases affecting downstream model performance.
- Collaborated with researchers and project leads to translate qualitative human judgments into structured signals suitable for model training and evaluation.
- Supported active learning and feedback loops by prioritizing difficult or high-impact examples for annotation.