AI Data Annotator
Contributed to large-scale AI model development by reviewing and annotating multimodal datasets spanning text, image, audio, and video, with the goal of improving AI system performance, safety, and accuracy through high-quality human evaluation.
- Labeled and categorized training data and evaluated AI-generated responses for factual accuracy and policy compliance.
- Identified bias and safety risks and flagged ambiguous or harmful content for escalation.
- Applied detailed annotation guidelines to ensure consistency across datasets while meeting strict quality and productivity benchmarks.
- Provided structured feedback on edge cases and model errors to strengthen model alignment, reliability, and ethical standards.
- Maintained confidentiality and adhered to data privacy requirements throughout the project lifecycle.