Content Review, Moderation, and Data Quality Assurance
Reviewed and labeled user-generated and AI-generated content against safety, compliance, and quality guidelines. Identified policy violations, categorized sensitive material, and evaluated AI responses for alignment with platform standards. Performed quality assurance checks to detect labeling inconsistencies and improve dataset reliability. Supported AI safety efforts by flagging edge cases and contributing to guideline refinements.