Text Moderation
Accuracy: 90%

The goal of the project is to identify and classify inappropriate language within a gaming environment. The task involves labeling and categorizing text data so that the AI system can be trained to automatically detect such content in the future. The main objective is to prevent and filter out text that violates content policies related to hate speech, sexual content, sexual harassment, threats, wishing harm, and violent speech. Cinder is the name of the platform used for this task.
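As a rough illustration, the policy categories listed above could be represented as a label set for annotated examples. This is a minimal sketch only: the class, field names, and single-label-per-message assumption are illustrative and do not reflect Cinder's actual schema or export format.

```python
from enum import Enum


class ModerationLabel(Enum):
    # Hypothetical label set mirroring the policy categories above
    HATE_SPEECH = "hate_speech"
    SEXUAL_CONTENT = "sexual_content"
    SEXUAL_HARASSMENT = "sexual_harassment"
    THREAT = "threat"
    WISHING_HARM = "wishing_harm"
    VIOLENT_SPEECH = "violent_speech"
    NONE = "none"  # text that violates no policy


def label_example(text: str, label: ModerationLabel) -> dict:
    """Package one annotated example as a simple training record."""
    return {"text": text, "label": label.value}


record = label_example("gg well played", ModerationLabel.NONE)
print(record)  # → {'text': 'gg well played', 'label': 'none'}
```

Keeping a dedicated `NONE` label matters in practice: the trained detector needs negative examples of ordinary gaming chat, not only policy violations.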