Titus Ortese


AI Data Annotation Specialist - Technology & Internet

Lagos, Nigeria
$5.00/hr · Intermediate · Internal Proprietary Tooling · Labelbox · Mercor

Key Skills

Software

Internal/Proprietary Tooling
Labelbox
Mercor

Top Subject Matter

No subject matter listed

Top Data Types

Video
Audio

Top Label Types

Segmentation
Emotion Recognition
Action Recognition
Transcription
Object Detection
Evaluation Rating

Freelancer Overview

I am an AI data annotation and video OCR specialist with over a year of hands-on experience supporting the development of large language models and generative AI systems. My background includes accurately labeling, validating, and refining datasets across text, audio, image, and video formats, using platforms like Micro1 AI, CrowdCompute, and SRT. I have collaborated closely with AI researchers to enhance model understanding of tone, style, and conversational flow, and have consistently met high-volume annotation targets while maintaining top-tier accuracy. My expertise also covers video-based OCR, transcription editing, frame-by-frame video review, and prompt engineering, all while ensuring data quality and ethical compliance. I am dedicated to producing reliable, human-centered training data that advances natural language understanding and model performance.

English (Intermediate)

Labeling Experience

Mercor

Task Segment Labeling Project

Mercor · Video · Segmentation · Action Recognition
The Task Segment Labeling Project focuses on creating accurate, factual timeline labels for head-mounted camera footage. The main objective is to produce a 100% fact-based description of actions occurring in the video, ensuring that every segment of the footage is properly annotated without gaps. Labels are machine-generated and then human-reviewed for accuracy, with a target of 97% correctness. Annotators review videos and either validate or reject existing labels based on strict guidelines. A label must be rejected if it contains hallucinations (actions or objects that did not occur), timestamp errors greater than two seconds, or overlapping timeframes. Valid labels must be factual, appropriately broad, and within the acceptable timestamp tolerance. Each label must describe visible actions only, avoid assumptions, and omit uncertain object details such as brand or color.


2025 - 2025
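The rejection rules described above (hallucinated content, timestamp errors over two seconds, or overlapping timeframes) can be sketched as a simple validation pass. This is a minimal illustration only; the label structure and field names (`start`, `end`, `true_start`, `true_end`, `hallucination`) are hypothetical, not the project's actual schema.

```python
# Hypothetical sketch of the label-rejection rules: a label is rejected
# if it is flagged as a hallucination, if either timestamp drifts more
# than `tolerance` seconds from the reviewed ground truth, or if its
# timeframe overlaps the previously accepted label.

def validate_labels(labels, tolerance=2.0):
    """Split labels (sorted by start time) into accepted and rejected lists."""
    accepted, rejected = [], []
    prev_end = float("-inf")
    for label in labels:
        timestamp_error = max(
            abs(label["start"] - label["true_start"]),
            abs(label["end"] - label["true_end"]),
        )
        if (label["hallucination"]
                or timestamp_error > tolerance
                or label["start"] < prev_end):  # overlaps prior segment
            rejected.append(label)
        else:
            accepted.append(label)
            prev_end = label["end"]
    return accepted, rejected
```

The overlap check uses the end of the last *accepted* label, so a rejected segment does not block the segments that follow it.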

Video Editing – Object Removal Evaluation Project

Internal Proprietary Tooling · Video · Emotion Recognition · Object Detection
The Video Editing – Object Removal Evaluation Project focuses on assessing the quality of AI-generated object removal in edited videos. The primary objective of the project is to compare the original “before” video, which contains a specified object, with one or more “after” videos where the object has been removed. The evaluator’s role is not to perform the edit but to critically assess how effectively the model removed the object and how natural the final result appears. The evaluation process begins by carefully reviewing the object description to clearly understand what should be removed. The evaluator then watches the original video to observe the object’s position, movement, surrounding environment, lighting, and timing. After that, each edited “after” video is reviewed and compared directly with the original to determine how well the removal was executed. The focus is strictly on visual content, as audio is ignored in this project.


2025 - 2025
Labelbox

Transcript Tagging

Labelbox · Audio · Segmentation
The Transcript Tagging and Audio Analysis Project is an audio-text alignment and verification initiative designed to ensure the accuracy, clarity, and contextual integrity of transcribed speech. The primary objective of the project is to compare audio recordings with their corresponding text transcripts and confirm whether the spoken content accurately matches the provided text for specific timestamped sections. In addition to verifying textual accuracy, the project also includes tone analysis as part of the analytical review process. Annotators carefully listen to audio clips and examine pre-segmented text linked to defined time intervals. Their task is to confirm that the words spoken in each section precisely match the text provided. They identify and flag discrepancies such as omitted words, added words, substitutions, mispronunciations, or unclear speech. Each segment is reviewed independently to ensure accurate alignment between the audio and the transcript.


2025 - 2025

Video Rating V2 Project – Micro1 Generalist

Internal Proprietary Tooling · Video · Segmentation · Emotion Recognition
The Video Rating V2 project focuses on evaluating AI-generated videos by comparing two candidate outputs created from a single user prompt. The main objective is to determine which video better satisfies the prompt using a holistic approach. Evaluators assess multiple aspects, including visual quality, audio quality, instruction following, realism, and overall user satisfaction, to support improvements in video generation models. The specific data labeling task involves claiming a task, reviewing the prompt, watching both videos carefully, and assigning a preference rating on a 1–7 scale. Annotators must provide short, independent justifications for each rating. Evaluation is guided by criteria such as model peculiarities, aesthetics, video and audio quality, instruction adherence, and naturalness versus AI-related artifacts.


2025 - 2025

Education


Federal University Birnin Kebbi

Bachelor of Science, Demography and Social Statistics

2019 - 2023

Work History


National Population Commission

Data Entry Assistant

Yobe State
2023 - 2024