Code Red Teaming
The goal of this project was to red team a model into generating harmful code.
I have hands-on experience in AI training and model evaluation through my work with AI companies, where I evaluated large language models by stress-testing their reasoning, safety, and robustness. This included intentionally inducing failure cases, identifying hallucinations, bias, and unsafe behaviors, and conducting red-teaming exercises to assess how models respond to adversarial prompts and edge cases. I provided structured, high-quality feedback to improve model performance, alignment, and safety across a wide range of scenarios in both English and Spanish. My background in cybersecurity, chemistry, and data science sets me apart by bringing a strong analytical and adversarial mindset to AI training tasks. I am skilled at breaking down complex outputs, validating factual and logical consistency, and applying security and scientific rigor to annotation and evaluation workflows.
The task was to create prompts that require a model to ingest and process one or more input files and produce one or more functional output files.
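To illustrate the shape of such a task (this is not the project's actual tooling), here is a minimal Python sketch of a multi-file-in, multi-file-out evaluation case; the prompt wording, file names, and the stubbed call_model function are all hypothetical, and a real model client would replace the stub.

```python
import json
from pathlib import Path


def build_prompt(input_paths):
    """Embed several input files in one prompt and state the expected file outputs."""
    sections = [f"--- FILE: {p.name} ---\n{p.read_text()}" for p in input_paths]
    task = ("Merge the data in the files above, then return a JSON object mapping "
            "each output filename to its full contents (e.g. a cleaned CSV and a summary).")
    return "\n\n".join(sections) + "\n\n" + task


def call_model(prompt):
    """Stand-in for a real model endpoint; returns a canned response so the sketch runs end to end."""
    return json.dumps({"merged.csv": "id,amount\n1,21\n2,21\n",
                       "summary.json": json.dumps({"rows": 2, "total": 42})})


def run_case(input_paths, output_dir):
    """Send the prompt and write each returned file so the outputs can be checked for functionality."""
    response = call_model(build_prompt(input_paths))
    output_dir.mkdir(parents=True, exist_ok=True)
    for name, content in json.loads(response).items():
        (output_dir / name).write_text(content)


if __name__ == "__main__":
    demo_inputs = [Path("sales_q1.csv"), Path("sales_q2.csv")]
    for p in demo_inputs:
        p.write_text("id,amount\n1,21\n")  # tiny demo inputs so the example is self-contained
    run_case(demo_inputs, Path("outputs"))
```

In the actual red-teaming setting, each output file would then be run or inspected to judge whether the model produced functional, and potentially harmful, code.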
Certification, Cybersecurity
Certification, Computer and Data Science
Offensive Security Operator
Laboratory Assistant