Analyst – Content and AI Evaluation Data Annotator
Reviewed and analyzed user-generated content for compliance and quality across multiple AI projects. Evaluated content against platform policies and community standards, identifying and flagging inappropriate material. Annotated and tagged data across various domains to support AI training and performance improvement.
• Evaluated user content in text, image, audio, and video formats for policy compliance.
• Designed and tested structured prompts for AI projects and performed QA testing.
• Assessed AI-generated responses for clarity and logical consistency.
• Maintained accuracy and consistency in the annotation process and suggested process improvements.