LLM fine tuning
Improving responses, multilingual adaptation and optimization.
I've been working with large language models and content moderation for several years, with a strong focus on safety, ethical use, and user protection. At Mindy Support, I currently evaluate LLM behavior to improve output quality, flag unsafe or biased responses, and help align AI models with human values. I also design safety checks and collaborate with research teams on reinforcement learning from human feedback (RLHF). Before that, at Telus International, I worked closely with policy enforcement to moderate content in Dutch, English, and Spanish, focusing on serious issues such as hate speech, child endangerment, and graphic content. That role taught me not only how to apply strict guidelines but also how to recognize harmful material and escalate it quickly to the right teams, and it sharpened my attention to cultural nuance, ethical judgment, and user well-being. At Thoth AI, I supported policy alignment by analyzing content issues and giving direct feedback to project managers, especially when sensitive or ambiguous situations arose.
Moderation of chat, posts, images, and video in accordance with policy.
Education, Coding
Bachelor's degree, International Business and Management
Infrastructure Technician
CM/Q&A Specialist, Dutch