AI Data Annotation & Quality Reviewer (Freelance / Project-Based)
As an AI Data Annotation & Quality Reviewer, I labeled and reviewed text datasets for classification, relevance, and safety/quality tagging, applying strict rubrics and detailed guidelines. I maintained label consistency by rechecking work, documenting rationale, and escalating ambiguous examples to ensure accurate, actionable AI model supervision. I also produced structured quality reports and maintained defect logs, self-QA checks, and reviewer notes to raise dataset quality and reduce labeling errors.
• Executed text classification and relevance labeling with strict adherence to guidelines.
• Applied quality and safety tags and wrote clear rationales for borderline cases.
• Detected and documented recurring annotation failures and edge cases, providing actionable feedback.
• Managed annotation workflows in Label Studio and internal proprietary tools to support model training and evaluation.