For employers

Hire this AI Trainer

Sign in or create an account to invite AI Trainers to your job.

Invite to Job
Abdulrazak Morenikeji

Abdulrazak Morenikeji

AI Trainer | LLM Specialist | Automation Engineer | Python & Workflow Expert

NIGERIA flag
Ibadan, Nigeria
$50.00/hrIntermediate

Key Skills

Software

No software listed

Top Subject Matter

AI/LLM Security Testing and Adversarial Evaluation

Top Data Types

TextText
Computer Code ProgrammingComputer Code Programming

Top Task Types

Red Teaming

Freelancer Overview

I am an Offensive Security Engineer with over 8 years of experience in web application penetration testing and AI security research. My work in AI red teaming and adversarial evaluation includes testing LLM-integrated systems for prompt injection, unsafe tool access, and data exfiltration risks. I specialize in building automated testing workflows, creating reproducible security assessments, and providing actionable remediation guidance. My expertise extends to designing and executing secure AI evaluation protocols, simulating real-world attack scenarios, and performing adversarial testing on AI-driven workflows. This experience allows me to contribute to AI training and evaluation projects with precision, ensuring the integrity and robustness of AI outputs. Data Labeling / AI Training Experience AI Security Red Teamer and Adversarial Evaluator Conducted adversarial testing on LLM systems to identify vulnerabilities and unsafe behavior. Developed automated scripts and frameworks to simulate prompt injections and data exfiltration scenarios. Designed AI testing workflows to validate system reliability and improve AI model safety. Assisted in generating structured evaluation data from AI outputs for security assessment purposes.

IntermediateEnglish

Labeling Experience

AI Security Red Teamer and Adversarial Evaluator

TextRed Teaming
I conducted adversarial red teaming and security evaluation of LLM-integrated systems, focusing on assessing AI behavior against prompts designed to expose vulnerabilities. This included evaluating prompt injection, trust boundary bypassing, and simulated data exfiltration in text-processing AI/LLM architectures. I developed and used automated payload testing workflows and structured reporting for effective LLM adversarial assessment. • Performed prompt injection testing on text-based AI/LLM systems • Simulated unsafe tool access and data exfiltration via adversarial prompts • Developed internal research platforms for LLM security evaluation • Delivered evaluation reports with actionable remediation guidance.

I conducted adversarial red teaming and security evaluation of LLM-integrated systems, focusing on assessing AI behavior against prompts designed to expose vulnerabilities. This included evaluating prompt injection, trust boundary bypassing, and simulated data exfiltration in text-processing AI/LLM architectures. I developed and used automated payload testing workflows and structured reporting for effective LLM adversarial assessment. • Performed prompt injection testing on text-based AI/LLM systems • Simulated unsafe tool access and data exfiltration via adversarial prompts • Developed internal research platforms for LLM security evaluation • Delivered evaluation reports with actionable remediation guidance.

2018 - Present

Education

U

UoPeople

Bachelor of Science, Computer Science

Bachelor of Science
2024 - 2026

Work History

E

Exploit Lab

Offensive Security Engineer

Ibadan
2018 - Present