Joseph Ibiwoye

AI Content & Media Evaluator — Outlier

Remote, United Kingdom
$20.00/hr
Expert

Key Skills

Software

No software listed

Top Subject Matter

Multimodal AI outputs
Educational and general AI use cases
AI tutor evaluation

Top Data Types

Video
Text
Image

Top Task Types

Prompt + Response Writing (SFT)

Freelancer Overview

AI Content & Media Evaluator — Outlier. Brings 5+ years of experience across complex professional workflows, research, and quality-focused execution. Core strengths include internal and proprietary tooling. Education includes a Doctor of Philosophy at BPP University (2022-present) and a Master of Science from the University of Toronto (2019). AI-training focus spans data types such as video and text, and labeling workflows including Evaluation, Rating, and Prompt + Response Writing (SFT).

English (Expert), Yoruba

Labeling Experience

AI Content & Media Evaluator — Outlier

Video
As an AI Content & Media Evaluator at Outlier, I reviewed multimodal outputs, including short-form video, audio, and images, for model improvement. My tasks involved annotating inconsistencies, selecting preferred completions, and collaborating on benchmark alignment. I used detailed rubrics to ensure data quality and provided structured feedback for model refinement.
• Evaluated video, audio, and image model outputs for perceptual and rubric-driven quality.
• Annotated edge cases, inconsistencies, and ambiguous completions.
• Collaborated with AI research teams on benchmark creation and alignment.
• Selected preferred completions using structured criteria.

2024 - Present

PhD Researcher — Education & Cognitive Systems for AI, BPP University

Text
During my PhD in Education & Cognitive Systems for AI at BPP University, I engaged in rubric creation, multimodal task annotation, and feedback alignment for AI tutor systems. My research focused on evaluating reasoning and logic skills in machine-generated outputs. I annotated text-based responses and aligned them with cognitive structures for AI evaluation.
• Developed and applied evaluation rubrics for AI tutors.
• Annotated multimodal reasoning tasks for assessment accuracy.
• Provided detailed feedback to align machine performance with human reasoning.
• Focused research on prompt engineering and logic modeling.

2022 - Present

Prompt & Evaluation Designer — OpenLearnTech

Text, Prompt + Response Writing (SFT)
As a Prompt & Evaluation Designer at OpenLearnTech, I built complex prompts with specific constraints and designed evaluation rubrics for consumer-facing AI models. My responsibilities included comparative ratings, QA for annotation reliability, and LLM assessment. I contributed to quality-control pipelines for model outputs.
• Created detailed prompt templates for varied use cases.
• Developed scoring rubrics for model output evaluation.
• Performed comparative assessments of model response quality.
• Supported annotation QA workflows, ensuring consistency.

2023 - 2024

Education

University of Toronto

Master of Science, Data Science and Artificial Intelligence Model Evaluation

2017 - 2019
University of California, Berkeley

Bachelor of Arts, Cognitive Science and Computational Linguistics

2011 - 2015

Work History

Freelance

Instructional Editor and Math Content Specialist

Remote
2019 - 2023