
Chinenye Chukwu

AI Data Annotator

Remote, Nigeria
$20.00/hr · Expert · CVAT · iMerit · Label Studio

Key Skills

Software

CVAT
iMerit
Label Studio
Mercor
Micro1
Mindrift
OneForma

Top Subject Matter

AI Training & Quality Analysis
LLM / AI Data Annotation & Evaluation
LLM Conversation Quality Assessment

Top Data Types

Text
Image
Video

Top Task Types

Classification
Prompt/Response Writing (SFT)
Bounding Box
Polygon
Segmentation
Entity (NER) Classification
Text Generation
Text Summarization
Object Detection
RLHF
Fine Tuning
Evaluation Rating
Transcription
Polyline
Point / Key Point

Freelancer Overview

AI Evaluator & Business Analyst at Turing. Brings 7+ years of professional experience spanning complex workflows, research, and quality-focused execution at Mercor, Turing, and other companies. Core strengths include internal and proprietary tooling. Education: Bachelor of Engineering, University of Nigeria, Nsukka. AI-training focus covers text data and labeling workflows including evaluation, rating, and classification.

Expert · English · Igbo

Labeling Experience

Mercor

AI Training Analyst | Mercor

Mercor · Text
As an AI Training Analyst, I performed in-depth conversational AI journey evaluations using project-specific rubrics and systematic side-by-side comparisons across Google AI Mode, ChatGPT, and similar LLMs. My contribution included SFT prompt creation for model fine-tuning and granular documentation of edge cases and response failures. I consistently provided high-quality feedback that directly improved AI model training outcomes.

• Evaluated multi-turn LLM conversations for coherence, helpfulness, and instruction adherence.
• Identified and documented model errors such as off-topic responses and hallucinations.
• Produced detailed qualitative and quantitative reports to support model improvement.
• Collaborated with clients to enhance LLM output through prompt engineering and data collection.


2026 - Present
Mercor

AI Evaluator & Business Analyst | Turing

Mercor · Text
As an AI Evaluator, I conducted structured side-by-side evaluations of large language model (LLM) outputs for factual accuracy, completeness, and contextual relevance. I utilized established evaluation rubrics and frameworks to maintain consistency and reduce inter-rater variance in quality scoring. Comprehensive written justifications and structured reports were synthesized from evidence-based analysis of model outputs.

• Compared and rated GHD AI outputs for multiple quality factors weekly.
• Validated model claims through fact-checking and deep research.
• Detected edge cases, instruction drift, and weak inference patterns.
• Leveraged browser-based and command-line tools to support large-scale evaluation workflows.


2026 - Present

Business Analyst & AI Product Manager | ChantUp

Text · Classification
As Business Analyst & AI Product Manager, I structured, reviewed, and refined annotated datasets used for AI-driven chatbot training. My focus was on classifying user intents, improving annotation accuracy, and enabling precise response logic for live chatbot deployments. Structured quality reviews, error analyses, and guideline updates were applied in each AI training cycle.

• Enhanced AI training data by reducing annotation ambiguity and boosting consistency.
• Refined intent classification workflows, raising output quality by 30%.
• Conducted QA audits and flagged edge cases to optimize training data coverage.
• Authored and implemented annotation SOPs and updated labeling guidelines.


2024 - 2025

AI Annotation Quality Framework

Text · Classification
In the AI Annotation Quality Framework initiative, I developed structured guidelines and rubrics to standardize intent annotation and reviewer consistency for an AI-powered system. I identified annotation error patterns and recommended targeted improvements for data precision and reviewer workflow. The framework resulted in improved labeling standards and minimized inter-annotator inconsistency.

• Authored guidelines covering 8+ intent categories for team annotation accuracy.
• Created standardized feedback templates and evaluation rubrics for reliability.
• Identified and addressed recurring labeling errors and workflow bottlenecks.
• Enhanced subsequent data labeling cycles by reducing rework and boosting precision.


2024 - 2024

BeepSafe AI Chatbot QA & Knowledge Base Annotation

Text · Classification
For the BeepSafe AI Chatbot QA & Knowledge Base Annotation project, I designed and refined annotation workflows for user intent classification and structured knowledge base content. This involved multiple QA and test cycles, with detailed identification and correction of response issues. The overall chatbot interaction performance and reliability were improved before product launch.

• Classified user intents across 8+ categories for accurate knowledge representation.
• Executed iterative annotation reviews to reduce rejection rates and improve outputs.
• Documented 12+ critical annotation errors and updated guidelines for quality gains.
• Delivered comprehensive MVP QA using structured, benchmark-driven testing.


2024 - 2024

Education

University of Nigeria, Nsukka

Bachelor of Engineering, Bioresources Engineering


Work History

I-Train Africa

CRM & Workflow Automation Specialist

Lagos
2022 - 2024

Chivacious Digital Hub Enterprises

Product Operations & Quality Lead

Lagos
2018 - 2022