AI Data Annotator & Content Evaluator (Freelance)
As an AI Data Annotator & Content Evaluator, I reviewed and labelled over 5,000 data samples spanning text, image, and instructional formats to meet stringent quality benchmarks. My responsibilities included evaluating AI-generated responses for factual accuracy, prompt adherence, and logical coherence using established frameworks. I collaborated with QA leads, performed RLHF tasks, and maintained detailed annotation logs to ensure high inter-rater reliability.
• Labelled diverse datasets using Appen, Remotasks, Surge AI, and Labelbox platforms.
• Assessed AI responses against the 3H (helpful, harmless, honest) rubric and flagged ambiguity patterns.
• Contributed medical and scientific expertise to annotation projects, particularly in healthcare and pharmacology.
• Reduced annotation inconsistency rates through regular calibration sessions with QA leads.