
Christian Gonoh

Data Annotator — Freelancer

Lagos, Nigeria
Expert · OneForma

Key Skills

Software

OneForma

Top Subject Matter

AI-generated content evaluation and safety
AI language model evaluation
Text classification and annotation

Top Data Types

Text

Top Task Types

Classification

Freelancer Overview

Data Annotator — Freelancer. Brings 5+ years of professional experience across complex workflows, research, and quality-focused execution. Core strengths include OneForma, N, and A. Education includes Bachelor of Science, University of Benin (2024). AI-training focus includes data types such as Text and labeling workflows including Evaluation, Rating, and Classification.

Expert

Labeling Experience

OneForma

Data Annotator — Freelancer

OneForma · Text
As a Data Annotator at OneForma AI (Centific), I evaluated and labeled a diverse array of AI-generated text outputs across multiple task types. Projects included response quality rating, text classification, content safety labeling, and instruction-following verification. Work directly contributed to RLHF (Reinforcement Learning from Human Feedback) pipelines and model improvement initiatives.

• Maintained high annotation accuracy across unfamiliar domains and project switches.
• Rapidly learned and applied new guidelines for specialty annotation tasks.
• Systematically flagged borderline content and articulated detailed evaluation rationale.
• Collaborated asynchronously to uphold batch quality and team consistency.

2025 - Present

Instruction-Following Review — AI Practice Project

Text
Performed Instruction-Following Review as an AI practice exercise, comparing AI outputs to original prompts to assess guideline adherence. Noted specific gaps in format, length, and content scope for each instance. Developed a structured review process modeled after industry annotation best practices.

• Reviewed pairs of prompts and model responses for compliance.
• Documented deviations and summarized common instruction-following issues.
• Focused on evaluating both content completeness and appropriate format.
• Strengthened attention to annotation detail through targeted error analysis.

2024

Text Classification Practice — AI Project

Text · Classification
Executed Text Classification Practice as an AI project, sorting batches of text samples by topic, tone, and content type. Applied consistent labeling across test datasets and self-reviewed for annotation accuracy. This project simulated professional text classification in real-world annotation workflows.

• Categorized each text sample based on guidelines for topic and tone.
• Maintained clear records of categorized outputs to support accuracy checks.
• Used standardized labels to ensure consistency across the dataset.
• Conducted self-audits of label accuracy and guideline adherence.

2024

LLM Response Quality Evaluation — AI Practice Project

Text
Independently conducted LLM Response Quality Evaluation as an AI practice project, reviewing AI responses for helpfulness, accuracy, safety, and format. Documented structured scoring notes and provided comparative rankings of model outputs. This exercise enhanced my ability to discern nuanced differences in LLM-generated text through structured quality review.

• Rated multiple AI-generated completions per prompt, assessing each for content quality and safety.
• Provided detailed explanations for high and low ratings using consistent criteria.
• Practiced comparative ranking and note-taking aligned to RLHF project standards.
• Self-reviewed evaluation consistency to replicate professional annotation standards.

2024

Education

University of Benin

Bachelor of Science, Economics and Statistics

2020 - 2024

Work History

Applied Worldwide

Guest Contributor

Lagos
2021 - 2025