Omogbolahan Okunola

AI Training and LLM Evaluation Specialist

Newark, USA
$30.00/hr · Intermediate · Mercor

Key Skills

Software

Mercor

Top Subject Matter

AI Model Evaluation
Legal Services & Contract Review
Regulatory Compliance & Risk Analysis

Top Data Types

Text
Audio
Document

Top Task Types

Evaluation Rating
Transcription
Text Summarization
Question Answering
Text Generation
Prompt-Response Writing (SFT)
Red Teaming
Classification

Freelancer Overview

AI Training and LLM Evaluation Specialist with 13+ years of professional experience across complex professional workflows, research, and quality-focused execution. Core strengths include AI training and evaluation work on the Mercor platform. Education: Bachelor of Science, City University of New York – College of Staten Island (2015). AI-training focus includes data types such as Text and labeling workflows such as Evaluation and Rating.

English (Intermediate)

Labeling Experience

Mercor

AI Training and LLM Evaluation Specialist

Mercor · Text
I evaluate and train large language models by reviewing, comparing, and rating their responses across various AI platforms. My role includes using structured rubrics to assess instruction retention, inference coherence, specificity, atomicity, and verifiability. This process helps improve the reliability, accuracy, and quality of generated responses.
• Conducted systematic evaluation of LLM responses using detailed rubrics
• Ranked model outputs to identify strengths and weaknesses in reasoning and factuality
• Generated prompt improvements and analyzed responses for hallucinations and errors
• Worked with platforms including Alignerr, Handshake AI, and Mercor to benchmark AI models

Present

Alignerr

Text · Evaluation Rating
Worked as an AI data annotator and evaluator supporting the training and improvement of large language models. Tasks included reviewing AI-generated responses, applying structured rubrics to evaluate quality, accuracy, reasoning, and instruction-following, and labeling datasets used to improve model performance. Followed detailed annotation guidelines to ensure consistency and reliability across tasks. Contributed to projects focused on natural language understanding, prompt-response evaluation, and model behavior analysis while maintaining high quality standards and meeting task deadlines.

2025 - 2026

Education

City University of New York – College of Staten Island

Bachelor of Science, Accounting

2013 - 2015

Work History

Drone Security

Site Supervisor

Staten Island
2022 - 2025
Amazon

Customer Service Associate

N/A
2018 - 2020