
Funmilayo Ige

AI Data Annotator

Ikenne-Remo, Nigeria
$10.00/hr · Intermediate · Mindrift, Toloka, Telus

Key Skills

Software

Mindrift
Toloka
Telus
Appen

Top Subject Matter

E-commerce: Product Categorization and Customer Support
Artificial Intelligence
Healthcare and Sciences

Top Data Types

Image
Text
Video

Top Task Types

Bounding Box
Classification
Object Detection
Text Generation
Evaluation/Rating

Freelancer Overview

I am an experienced data labeler with a strong background in AI data annotation and content labeling across multiple platforms, including Toloka, Turing, Telus, and Leui AI. My work focuses on image and video annotation, dataset preparation, and quality assurance, ensuring high levels of accuracy and consistency to support machine learning models. I have developed a keen eye for detail, the ability to follow complex guidelines, and the adaptability to work efficiently in fast-paced, technology-driven environments. I am committed to delivering reliable, high-quality annotated data and continuously improving processes to enhance overall model performance.

English (Intermediate)

Labeling Experience

Video labelling

Video · Classification
This project focuses on annotating and reviewing egocentric (first-person) videos that capture humans performing physical tasks in real-world environments. Each video is segmented into distinct action-based events, representing specific activities carried out by the camera wearer (ego). The overall goal is to produce high-quality, structured annotations that accurately describe human actions, object interactions, and temporal boundaries within each segment, enabling effective training of computer vision and activity recognition models.

As a reviewer, I evaluated segment-level text annotations to ensure accuracy and consistency. My responsibilities included verifying and correcting action labels, identifying the primary activity performed by the ego, confirming the correct objects involved in each interaction, and ensuring timestamps precisely aligned with the start and end of each segment. I also ensured annotations adhered strictly to project guidelines, resolving ambiguities and maintaining uniform labeling standards across the dataset.

I have reviewed and annotated over 500 video segments, demonstrating extensive experience with large-scale data annotation workflows and sustained consistency across diverse task scenarios. Quality assurance was maintained through strict guideline compliance, consistency checks, and attention to detail in action-object mapping and timestamp accuracy. Emphasis was placed on minimizing labeling errors, ensuring inter-annotator consistency, and maintaining high precision in segment boundaries to support reliable model training outcomes.


2026 - Present

Face annotation

Image · Bounding Box
This project focused on single-person frame annotation, where each image was treated as an independent unit for accurate facial labeling. The main task involved identifying faces, adjusting bounding boxes when necessary, and assigning labels such as facial expression, head position, and eye status based strictly on visible features. I annotated over 1,000 image frames, demonstrating consistency and efficiency across a large dataset. Quality was maintained by carefully reviewing each image, ensuring precise bounding box placement, and strictly following annotation guidelines. Emphasis was placed on accuracy, consistency, and objective labeling to produce reliable and high-quality data outputs.


2026 - Present

Evaluating AI Prompt Responses

Text · Evaluation/Rating
This project, called ELO, focused on evaluating and ranking AI-generated responses using an Elo preference framework to improve model performance. The primary scope involved comparing two responses (A and B) to a given prompt and determining which performed better based on clarity, accuracy, and usefulness.

The core data labeling tasks included selecting the overall winner (or identifying ties/both poor responses), conducting quality checks on each response (instruction adherence, correctness, and helpfulness), and providing concise justifications for each decision. Each annotation required careful reading, critical evaluation, and consistency in judgment. The project size exceeded 100,000 labeled tasks, demonstrating substantial hands-on experience with large-scale evaluation workflows.

To ensure high-quality outputs, strict quality measures were followed, including adherence to detailed annotation guidelines, consistency across similar tasks, and clear, evidence-based reasoning for each selection. Regular self-review and alignment with expected standards were maintained to minimize bias and improve labeling accuracy.


2025 - Present
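For context, the Elo preference framework used in pairwise comparisons like the project above can be sketched in a few lines. This is an illustrative example only, not the project's actual tooling; the function names and K-factor of 32 are assumptions.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that response A is preferred over B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float, outcome: float, k: float = 32.0):
    """Update both ratings after one comparison.

    outcome: 1.0 if A wins, 0.0 if B wins, 0.5 for a tie.
    """
    exp_a = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (outcome - exp_a)
    new_b = rating_b + k * ((1.0 - outcome) - (1.0 - exp_a))
    return new_a, new_b

# Equal ratings, A preferred: A gains 16 points, B loses 16.
print(elo_update(1000.0, 1000.0, 1.0))  # (1016.0, 984.0)
```

Aggregating many such pairwise judgments yields a global ranking of responses, which is why consistent, well-justified winner selections matter at scale.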

Education


Federal University of Agriculture, Abeokuta, Nigeria

Master's, Biochemistry
2018 - 2024

Olabisi Onabanjo University, Ago-Iwoye, Nigeria

Bachelor of Science, Biochemistry
2012 - 2016

Work History


O&A Academy

Chemistry Teacher

Ikenne
2021 - 2021