
Oluwaseun Adeleye

AI Data Annotator & Search Quality Evaluator

FCT, Nigeria
$25.00/hr · Entry Level · Scale AI · Telus · CVAT

Key Skills

Software

Scale AI
Telus
CVAT
Labelbox
LabelImg
Label Studio
OpenCV AI Kit (OAK)

Top Subject Matter

Legal Services & Contract Review
Regulatory Compliance & Risk Analysis
Legal Research & Document Analysis

Top Data Types

Image
Text
Video
Document

Top Task Types

Action Recognition
Audio Recording
Bounding Box
Classification
Entity (NER) Classification
Evaluation/Rating
Prompt + Response Writing (SFT)
Question Answering
Text Generation
Text Summarization
Transcription

Freelancer Overview

AI Data Annotator & Search Quality Evaluator with core strengths in internal and proprietary tooling. AI-training focus includes text data and labeling workflows such as evaluation/rating and classification.

Entry Level · English

Labeling Experience

AI Data Annotator & Search Quality Evaluator

Text
As an AI Data Annotator & Search Quality Evaluator at Rovco, I assessed search engine results and annotated text-based content for relevance, intent, and quality. I used structured rating scales and decision trees to categorize results and flag inconsistencies, producing reliable ground-truth labels. My work supported AI training and validation while ensuring data confidentiality at all stages.
• Evaluated SERPs and text outputs for relevance and accuracy.
• Applied complex guidelines to multi-factor annotation and labeling tasks.
• Flagged policy violations, misinformation, and low-quality outputs.
• Produced consistent annotations enabling downstream AI improvements.

2025 - Present
Scale AI

AI Image & Multimodal Output Evaluation (Generative AI)

Scale AI · Image · Bounding Box
Performed image selection and post-processing using brush-based masking to remove undesired objects or artifacts from AI-generated images. Guided controlled regeneration of masked regions to produce visually consistent replacements, ensuring structural continuity, accurate context blending, and adherence to prompt constraints. Evaluated regenerated outputs against quality rubrics and rejected outputs with artifacts, inconsistencies, or prompt drift.

2025 - 2025

AI Output Reviewer & Text Annotation Specialist

Text · Classification
As an AI Output Reviewer & Text Annotation Specialist at Integral Research, I reviewed and classified online content according to user needs, query intent, and project standards. I contributed to data validation by applying critical thinking and making defensible annotation decisions. My efforts ensured high-quality language data for use in AI development and testing.
• Classified and annotated online text in alignment with set rubrics.
• Assessed language quality, clarity, and contextual fit in text datasets.
• Performed structured web research for validating content credibility.
• Supported annotation and QA workflows for production AI systems.

2024 - 2025

QA Evaluator

Text
As a QA Evaluator at QualiTest, I assessed written responses, search results, and AI-generated text against linguistic and policy guidelines. I annotated data for errors in semantics, syntax, and accuracy while providing corrective feedback. My detailed evaluations improved content quality and model effectiveness for AI systems.
• Conducted side-by-side comparisons of parallel AI outputs.
• Identified and annotated deviations or inconsistencies in text data.
• Collaborated with QA leads to refine and clarify evaluation criteria.
• Documented recurring trends for model improvement initiatives.

2024 - 2024

Content Labeling & Data Validation Analyst

Text · Classification
As a Content Labeling & Data Validation Analyst at DataBuzz Ltd, I validated and categorized large text datasets for trends, errors, and quality assurance. I developed filtering criteria to improve data processing and provided feedback on annotation standards. My analysis enabled higher quality data models and more effective data pipelines.
• Categorized and labeled text and conversational logs for analysis.
• Ensured client data met stringent quality control measures.
• Provided feedback to teams regarding annotation discrepancies.
• Contributed to quality control and predictive modeling workflows.

2023 - 2024

Education

University of the West of England

Bachelor of Science, Computer Science

2016 - 2019

Work History

QualiTest

Bristol
2024 - 2024

New Icon

Business Intelligence Developer / Data Visualization Specialist

Bristol
2023 - 2023