Chimdiebube Egereonu

AI Red Teamer and Security Researcher (LLM Red Teaming and Prompt Attack)

Owerri, Nigeria
$15.00/hr
Intermediate

Key Skills

Software

No software listed

Top Subject Matter

AI/LLM Security
Adversarial ML
AI Security Evaluation

Top Data Types

Text
Document
Computer Code Programming

Top Task Types

Red Teaming
Function Calling
Computer Programming/Coding
Transcription
Text Generation
Question Answering
Text Summarization
Evaluation/Rating

Freelancer Overview

AI Red Teamer and Security Researcher (LLM Red Teaming and Prompt Attack) with 4+ years of professional experience spanning complex workflows, research, and quality-focused execution. Core strengths include internal and proprietary tooling. Education: Bachelor of Science, Federal University of Technology Owerri (2022). AI-training focus covers data types such as Text and labeling workflows including Red Teaming, Evaluation, and Rating.

English (Intermediate)

Labeling Experience

AI Red Teamer and Security Researcher (LLM Red Teaming and Prompt Attack)

Text
Red Teaming
Led red teaming and security testing of Large Language Models (LLMs) to identify vulnerabilities arising from model handling of adversarial prompts and malicious input. Authored formal reports on AI/LLM vulnerabilities, contributing to model improvement through adversarial interaction and prompt manipulation. Developed specialized prompt injection and jailbreaking sequences as part of independent, bug bounty-driven AI security research.
• Conducted prompt injection and jailbreaking experiments targeting live LLM APIs.
• Evaluated LLM responses against the OWASP LLM and Agentic AI Top 10 frameworks.
• Documented security findings and model behavior for AI model improvement.
• Delivered feedback on model weaknesses to LLM maintainers and platforms.

2024 - Present

Independent Security Researcher (AI Model Evaluation/Rating)

Text
Conducted independent evaluations of AI models and APIs by submitting crafted adversarial prompts to assess susceptibility to security issues and data leakage. Systematically reported vulnerabilities and edge cases to relevant bug bounty programs and researchers for AI model hardening. Maintained a detailed log of LLM interactions and response ratings as part of responsible disclosure.
• Focused on API-driven AI models, including YC-backed LLM startups.
• Assessed model security and output fidelity using adversarial QA.
• Provided structured research notes and walkthroughs for responsible disclosure.
• Submitted formal findings with recommended mitigations.

2023 - Present

Education

Federal University of Technology Owerri

Bachelor of Science, Cybersecurity
2022

Work History

Rootmaze Security Research

Founder & AI Security Researcher

Owerri
2024 - Present

Independent Security Researcher

Bug Bounty Hunter & Security Researcher

Owerri
2023 - Present