GUMGUM
I review and analyze text content from a wide range of websites to identify and classify threatening or harmful concepts. My role involves carefully reading each article to determine whether a threat is present (such as physical violence, terrorism, or self-harm) and classifying the content according to predefined categories and guidelines. When a threat is detected, I verify its specific type and context to ensure consistent labeling and accurate data quality. The project's purpose is to train and refine models that improve the safety and effectiveness of online advertising campaigns by ensuring that ads are not displayed alongside unsafe or harmful content.
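The labeling workflow described above could be sketched roughly as follows. This is a minimal illustration only: the category names, the `ThreatLabel` record, and the `label_article` helper are all hypothetical stand-ins, since the project's actual taxonomy and guidelines are predefined internally and not stated here.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical category set for illustration; the real project uses its
# own predefined taxonomy.
THREAT_CATEGORIES = {"physical_violence", "terrorism", "self_harm"}


@dataclass
class ThreatLabel:
    threat_present: bool
    category: Optional[str] = None  # set only when a threat is present


def label_article(threat_present: bool, category: Optional[str] = None) -> ThreatLabel:
    """Validate a label against the category set before recording it,
    mirroring the verification step that keeps labeling consistent."""
    if threat_present:
        if category not in THREAT_CATEGORIES:
            raise ValueError(f"unknown threat category: {category!r}")
        return ThreatLabel(True, category)
    return ThreatLabel(False)
```

For example, `label_article(True, "terrorism")` produces a validated record, while a misspelled category is rejected instead of entering the training data.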