
Simon Kabei

Senior Content Safety Evaluator

Remote, Kenya
$25.00/hr · Expert · Data Annotation Tech · CVAT · Crowdsource

Key Skills

Software

Data Annotation Tech
CVAT
CrowdSource
CrowdFlower
Clickworker
Axiom AI
Dataturk
Datature

Top Subject Matter

AI content safety
Community standards
Content moderation

Top Data Types

Image
Text
Document

Top Task Types

Bounding Box
Segmentation
Polygon
Classification
Entity (NER) Classification
Point/Key Point
Polyline
Cuboid
Object Detection

Freelancer Overview

Senior Content Safety Evaluator. Brings 7+ years of professional experience across legal operations, contract review, compliance, and structured analysis. Core strengths include Internal and Proprietary Tooling. Education includes Bachelor of Science, University of Florida (2019). AI-training focus includes data types such as Image and Text and labeling workflows including Evaluation and Rating.

Expert · English · Swahili

Labeling Experience

Senior Content Safety Evaluator

Image
As a Senior Content Safety Evaluator at SafeStream Digital, I analyzed a high volume of AI-generated images and text for policy compliance and safety. I reviewed and escalated harmful, abusive, or policy-violating content to improve AI model classification rules. I provided detailed documentation to enhance the moderation framework and support new evaluators.
• Maintained a 98.7% accuracy score while evaluating 400+ images and text items daily.
• Escalated edge cases to refine model performance and reduce false positives/negatives.
• Produced weekly reports to identify and mitigate emerging harmful content patterns.
• Utilized internal/proprietary tooling and leading AI moderation APIs for comprehensive analysis.

2022 - Present

Content Moderation Specialist

Image
As a Content Moderation Specialist with TrustGuard Media Solutions, I moderated user-generated social media, video, and e-commerce content with a primary focus on image and multimedia evaluation. I identified manipulated images, deepfakes, and misleading media requiring nuanced human analysis. I contributed to improving evaluation frameworks and trend detection methods for harmful content.
• Applied community guidelines to resolve complex, ambiguous content moderation cases.
• Authored internal guides that improved evaluator consistency and policy alignment.
• Monitored and flagged emerging visual content threats in real time.
• Collaborated using both proprietary annotation tools and standard moderation platforms.

2020 - 2021

Digital Safety Analyst (Internship)

Text
During my internship at OpenMind Labs as a Digital Safety Analyst, I labeled and annotated AI chatbot outputs for safety, bias, and policy compliance. I supported the research team by building datasets of harmful content categories for fine-tuning AI content detection models. I also conducted comprehensive reviews of digital threats to support model improvement.
• Evaluated AI-generated textual data in a structured annotation environment.
• Assisted in dataset creation to enhance AI safety and detection accuracy.
• Applied systematic policy and bias review to labeled data.
• Used internal data annotation tools and documented results for research use.

2019 - 2020

Education


University of Florida

Bachelor of Science, Psychology

2015 - 2019

Work History


SafeStream Digital

Senior Content Safety Evaluator

Remote
2022 - Present

TrustGuard Media Solutions

Content Moderation Specialist

Remote
2020 - 2021