Harmful Content & Safety Review
Reviewed AI-generated text for harmful, unsafe, or sensitive content and labeled it according to safety and policy guidelines. Evaluated outputs in multiple languages, providing translations and cultural adaptations to ensure accurate, context-appropriate responses. Combined content moderation with linguistic review to improve AI safety and multilingual understanding.