Abolaji Ojebode

Manual Software Tester - Technology & Internet

Manchester, United Kingdom
$15.00/hr · Expert

Key Skills

Software

Appen
Data Annotation Tech
Mercor
OneForma

Top Subject Matter

No subject matter listed

Top Data Types

Document
Image
Text

Top Label Types

Bounding Box
Classification
Evaluation Rating
Prompt Response Writing SFT
RLHF

Freelancer Overview

I have hands-on experience working on AI training and data labeling projects across platforms such as CrowdGen, OneForma, and Outlier. On CrowdGen, I contributed to projects like Rincon, Uolo, and Ogden, where I focused on accurate data annotation, guideline compliance, and quality checks. On OneForma, I worked on projects including Bumblebee, Lighthouse, and Lightspeed, reviewing and labeling text and other datasets according to strict project rules. On Outlier, I supported Project Aether, where I evaluated and refined AI-generated outputs to improve model performance. These roles required close attention to detail, consistency, and the ability to apply complex instructions correctly across large volumes of data.

My background in manual software testing strengthens my work in AI training data. I am used to validating outputs against defined requirements, spotting inconsistencies, and documenting issues clearly. I work comfortably with structured guidelines, quality metrics, and review feedback, and I use SQL for basic data checks when needed. This mix of QA discipline and real project experience across multiple labeling platforms allows me to deliver accurate, reliable datasets that support strong AI model performance.

English (Expert)

Labeling Experience

Appen

Hermes

Appen · Text · Classification · RLHF
Project Hermes was a large-scale Artificial Intelligence (AI) evaluation programme created to assess and enhance the performance of generative models across real-world business use cases. The project processed tens of thousands of input-output pairs each week, with global teams poring over huge volumes of data daily. In that context, I assessed the quality of model responses against the HEMC guidelines, evaluating helpfulness, correctness, language quality, faithfulness to the source text, risks of hallucination, and correct handling of disclaimers and completions. Each unit called for structured reasoning, close reading of raw text, and adherence to defined rubrics. Quality control was rigorous, including gold-standard benchmarking, blind double reviews, inter-rater agreement monitoring, audit sampling, and enforced accuracy thresholds.

2024 - 2025

Education

Sheffield Hallam University

Master of Science, Public Health

2024 - 2025
Kwara State University

Bachelor of Science, Public Health

2017 - 2022

Work History

Wizcore IT Global

Manual Software Tester

Manchester
2025 - 2025