
Josha Unumen

Freelance AI Data Annotator & Evaluator

Abuja, Nigeria
$18.00/hr · Intermediate · Remotasks, Appen, Toloka

Key Skills

Software

Remotasks
Appen
Toloka
iMerit

Top Subject Matter

General AI/ML Training Data
LLM/AI Response Evaluation
Text Data Annotation for Machine Learning

Top Data Types

Text
Audio
Image

Top Task Types

Classification

Freelancer Overview

Freelance AI Data Annotator & Evaluator with 3+ years of professional experience across complex annotation workflows, research, and quality-focused execution. Core platforms include Remotasks, Appen, and Toloka. Education: Bachelor of Engineering, Ambrose Alli University (2023). AI-training focus covers data types such as Text and labeling workflows including Evaluation, Rating, and Classification.

English: Intermediate

Labeling Experience

Appen

Reviewer, AI Quality Assurance & Content Review Project

Appen · Text
In an AI Quality Assurance & Content Review Project, I reviewed AI outputs to confirm compliance with quality standards and identify dataset errors. My work involved thorough assessment and feedback on system performance, focusing on finding inconsistencies and areas for improvement. This project supported ongoing efforts to enhance the accuracy and reliability of AI systems and datasets.
• Conducted thorough reviews of AI-generated text for guideline compliance
• Reported errors and quality issues for remediation and system improvements
• Provided actionable feedback to assist in refinement of AI outputs
• Contributed to higher overall system reliability and dataset quality

2025 - Present
Toloka

Annotator, Data Annotation & Content Labeling Project

Toloka · Text · Classification
During a Data Annotation & Content Labeling Project, I annotated datasets in various text formats to support machine learning models. I ensured that data was consistently labeled and conformed strictly to guidelines to enhance downstream model performance. My attention to detail contributed to the reliability of the datasets used for supervised training.
• Labeled diverse text content for AI model training with a focus on accuracy
• Applied detailed annotation procedures as specified by platform protocols
• Maintained consistency through multiple stages of the project
• Ensured rigorous guideline adherence for optimum dataset usability

2025 - Present
Appen

Contributor, AI Response Evaluation & Ranking Project

Appen · Text
I participated in an AI Response Evaluation & Ranking Project where I assessed AI-generated outputs for accuracy, relevance, and instruction compliance. In this project, I improved dataset consistency through structured evaluation and detailed feedback. This experience enhanced the overall quality of responses used in AI training initiatives.
• Evaluated LLM responses against provided rubrics for multiple judgment categories
• Provided structured rationale for each evaluation to improve downstream use
• Contributed to the development of high-quality, consistent datasets for supervised learning
• Helped refine prompt compliance and response quality guidelines

2025 - Present
Remotasks

Freelance AI Data Annotator & Evaluator

Remotasks · Text
As a freelance AI Data Annotator & Evaluator, I annotated and labeled large-scale datasets including text, audio, and image formats to support machine learning models. My work involved ranking AI-generated responses, maintaining high quality standards, and providing rationale for label choices. I also identified edge cases and ambiguities in guidelines to improve future annotation projects.
• Annotated data carefully across multiple modalities to support supervised learning pipelines
• Ranked and compared AI responses for accuracy, relevance, and safety using strict rubrics
• Provided structured label rationales to enhance dataset reliability for AI teams
• Flagged ambiguous cases and contributed feedback to enhance annotation standards

2025 - Present

Education

Ambrose Alli University

Bachelor of Engineering, Electrical and Electronics Engineering
2016 - 2023

Work History

uTest

QA Tester

Abuja
2024 - Present