
Abiola Olaleye

Experienced Python Developer/AI Trainer

Osogbo, Nigeria
$35.00/hr · Expert

Key Skills

Software

Mindrift
Other
Don't disclose

Top Subject Matter

No subject matter listed

Top Data Types

Computer Code Programming
Document
Image
Text
Video

Top Label Types

Action Recognition
Bounding Box
Classification
Computer Programming Coding
Data Collection
Evaluation Rating
Object Detection
Point Key Point
Prompt Response Writing SFT
Question Answering
Red Teaming
RLHF
Segmentation
Text Generation
Text Summarization

Freelancer Overview

I am an experienced Python developer and AI systems specialist with over 7 years of hands-on work in data processing, machine learning, and large-scale data annotation projects. My expertise includes debugging and optimizing ML pipelines for model training and evaluation, performing detailed root-cause analysis on data preprocessing workflows, and delivering high-quality labeled datasets for AI applications.

I have led teams in annotating and validating training data for LLMs, including tasks such as robotic action annotation, prompt-injection defense, and benchmarking model outputs for clarity, factual correctness, and style. My work spans domains such as computer vision (medical imaging, radiology reporting), NLP, and document processing, using tools like Pandas, NumPy, scikit-learn, TensorFlow, and PyTorch.

I am skilled in designing annotation workflows, developing automation scripts for data extraction and labeling, and providing gold-standard reference outputs to improve model alignment and performance. My commitment to clean code, rigorous testing, and structured feedback ensures reliable and robust AI training data pipelines.

English: Expert

Labeling Experience

MLE Code Debug Nodes

Other · Computer Code Programming · RLHF · Computer Programming Coding
The project involved iteratively debugging and correcting Machine Learning Engineering and Data Science code sourced from Kaggle. The work focused on identifying logical errors, data leakage, incorrect assumptions, and implementation bugs, then fixing them step by step to produce correct, reproducible pipelines. Each debugging iteration was documented and labeled to train the model on recognizing common failure patterns and effective resolution strategies in real-world MLE workflows. The resulting dataset improves the model’s ability to analyze, debug, and refine ML and data science code autonomously.
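
One of the most common failure patterns in this kind of work is data leakage from preprocessing applied before the train/test split. A minimal sketch of the corrected pattern (synthetic data; the scikit-learn `Pipeline` usage is a general illustration, not this project's actual code):

```python
# Leakage bug: fitting a scaler on the full dataset before splitting lets
# test-set statistics leak into training. The fix: split first, then fit
# all preprocessing inside a Pipeline on the training fold only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)  # label depends only on the first feature

# Split BEFORE any fitting, so test data never influences preprocessing.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),   # fitted on X_train only, inside fit()
    ("clf", LogisticRegression()),
])
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
```

Because the scaler lives inside the pipeline, cross-validation and grid search also refit it per fold, keeping each evaluation leak-free.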

2025

Human Actions Annotation

Other · Video · Action Recognition
This project involved training an AI model using human action data extracted from videos. The workflow focused on interpreting the intent of each task shown in the videos, decomposing tasks into clear, ordered subtasks, and annotating each step with precise action descriptions. Detailed annotation guidelines were applied to ensure consistency, correctness, and exact correspondence between observed actions and labeled outputs. These structured annotations were used to teach the model how to recognize, reason about, and replicate robotic actions based on visual input. The result is higher-quality action understanding and improved downstream robotic policy learning.
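
The intent-plus-ordered-subtasks decomposition described above maps naturally onto a simple record schema. A hypothetical sketch (field names and the validation rule are illustrative, not the project's actual format):

```python
# Hypothetical annotation record: one task intent decomposed into ordered,
# timestamped subtasks, with a basic consistency check.
from dataclasses import dataclass, field


@dataclass
class Subtask:
    order: int      # position in the ordered decomposition
    action: str     # precise description of the observed action
    start_s: float  # action start time in the video (seconds)
    end_s: float    # action end time in the video (seconds)


@dataclass
class ActionAnnotation:
    video_id: str
    intent: str                                      # interpreted goal of the task
    subtasks: list[Subtask] = field(default_factory=list)

    def validate(self) -> bool:
        """Subtasks must not overlap in time when sorted by order."""
        ts = sorted(self.subtasks, key=lambda s: s.order)
        return all(a.end_s <= b.start_s for a, b in zip(ts, ts[1:]))


ann = ActionAnnotation("vid_001", "pour water into cup", [
    Subtask(1, "grasp bottle", 0.0, 1.2),
    Subtask(2, "tilt bottle over cup", 1.2, 3.0),
])
print(ann.validate())  # True
```

Encoding guideline rules as executable checks like `validate()` is one way to enforce the "exact correspondence" requirement automatically before annotations are submitted.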

2025

Robotic Actions Annotation

Other · Video · Point Key Point · Segmentation
The project focused on training an AI model using videos of robots performing actions. The work involved identifying the intent of each robotic task, breaking it down into ordered subtasks, and annotating exact robotic actions and state transitions observed in the videos. Detailed annotation instructions were followed to ensure high-fidelity, unambiguous labels aligned with the robot’s movements, interactions, and control logic. These annotations were used to train the model to accurately understand, generalize, and reproduce robotic action sequences from visual input.

2025

Red Teaming

Don't Disclose · Text · Red Teaming
The project focused on red teaming large language models to identify and stress-test security and robustness weaknesses. Activities included systematic testing for prompt injection, jailbreak techniques, data exfiltration risks, and other adversarial failure modes. Each discovered vulnerability was documented with reproducible attack paths, impact analysis, and categorized failure points. Findings were fed back to researchers to support patching, retraining, and safety mitigation, strengthening the model’s resistance to real-world misuse and adversarial behavior.

2024 - 2025

General User Request Response Evaluation

Other · Text · Question Answering · Text Generation
The project focused on evaluating model responses to normal user requests across a wide range of domains. Outputs were rated against defined quality dimensions including clarity, correctness, instruction adherence, hallucination risk, formatting, stylistic alignment, and completeness. Edge cases and ambiguous prompts were carefully assessed to distinguish partial compliance from full success. Structured ratings and qualitative feedback were used to surface systematic weaknesses and inform model tuning, alignment, and quality improvements.
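
Structured ratings across defined quality dimensions can be combined into a single comparable score with a weighted rubric. A hypothetical sketch (the dimensions and weights are illustrative, not the project's actual rubric):

```python
# Hypothetical rubric: rate a response 1-5 on each dimension, then
# combine the ratings into a weighted overall score.
WEIGHTS = {
    "clarity": 0.20,
    "correctness": 0.35,
    "instruction_adherence": 0.25,
    "formatting": 0.10,
    "completeness": 0.10,
}


def overall_score(ratings: dict[str, int]) -> float:
    """Weighted mean of per-dimension ratings on a 1-5 scale."""
    assert set(ratings) == set(WEIGHTS), "rate every dimension exactly once"
    assert all(1 <= r <= 5 for r in ratings.values()), "ratings are 1-5"
    return round(sum(WEIGHTS[d] * r for d, r in ratings.items()), 2)


score = overall_score({
    "clarity": 5, "correctness": 4, "instruction_adherence": 5,
    "formatting": 4, "completeness": 3,
})
print(score)
```

Weighting correctness and instruction adherence most heavily reflects the priority those dimensions typically get when distinguishing partial compliance from full success.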

2024

Education

Federal University of Technology, Minna, Niger State, Nigeria

Master's, Computer Science

2018 - 2021

Ladoke Akintola University of Technology

Bachelor of Technology, Computer Science

2007 - 2011

Work History


Turing

Senior Python Developer & Team Lead

Remote
2023 - Present

Dreambox Global-Tech

Founder & Python Developer

Osun State
2020 - 2023