
Gbohunmi Oredipe

AI Data Annotator – Computer Vision & Image Segmentation

Lagos, Nigeria
$10.00/hr · Intermediate · Roboflow · CVAT · Label Studio

Key Skills

Software

Roboflow
CVAT
Label Studio

Top Subject Matter

Computer Vision / Image Annotation
Autonomous Systems / Robotics
Agriculture / Smart Farming

Top Data Types

Image
Audio
Computer Code / Programming

Top Task Types

Bounding Box
Segmentation
Classification
Object Detection
Fine Tuning
Data Collection
Computer Programming / Coding

Freelancer Overview

Electrical Engineering Intern with 1+ years of professional experience spanning complex workflows, research, and quality-focused execution. Education includes a Bachelor of Science in Mechatronics Engineering, Federal University of Agriculture, Abeokuta (2023).

English (Intermediate)

Labeling Experience

Chicken Segmentation

Image · Segmentation
This project focuses on developing a low-cost computer vision system for estimating the weight of poultry without relying on 3D depth sensors. Instead of depth data, the approach uses image segmentation to isolate the chickens from the background and extract visual features for weight estimation. The system captures images from three different camera angles, which are combined and used as input to a lightweight CNN model.

The current phase centers on building a reliable segmentation pipeline. I have been preparing and organizing the dataset for training the segmentation model, which will separate the chickens from irrelevant background elements such as the platform, waste, and surrounding environment. The goal of this step is to generate clean masks from each camera view so that only the relevant regions (the chickens) are fed into the model, improving accuracy while keeping the system computationally efficient; this matters given the low-cost constraint and the need for a lightweight model that can run in practical settings.

The project is still in progress, with initial model training underway to validate the segmentation approach before moving on to full weight estimation.
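
The masking step described above can be sketched as follows. This is a minimal illustration with synthetic arrays, not the project's actual code; the function name and toy data are assumptions.

```python
import numpy as np

# Hypothetical sketch: apply a binary segmentation mask so that only the
# masked (chicken) pixels from one camera view are kept and the
# background is zeroed out before the image is fed to the CNN.
def apply_mask(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """image: HxWx3 uint8; mask: HxW uint8 with 0 (background) / 255 (chicken)."""
    binary = (mask > 127).astype(np.uint8)   # 0/1 per pixel
    return image * binary[:, :, None]        # broadcast the mask over channels

# Toy 2x2 example: only the top-left pixel survives the mask.
img = np.full((2, 2, 3), 200, dtype=np.uint8)
msk = np.array([[255, 0], [0, 0]], dtype=np.uint8)
out = apply_mask(img, msk)
```

Zeroing the background this way keeps the input tensor shape fixed, which suits a lightweight CNN better than cropping to variable-sized regions.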


2026 - Present

Privacy Protection

Image · Segmentation
I worked on a computer vision project focused on privacy protection, where I was responsible for creating a high-quality annotated dataset for training a segmentation model. I manually annotated 1,000 images by generating pixel-level binary masks (0–255) that isolate upper-body clothing such as shirts and tops. For precision, I used GIMP to trace tight boundaries around each clothing item, attending to details like folds, sleeves, and partially occluded areas, and I followed a consistent annotation approach across all images so that only the relevant clothing regions were included while skin, accessories, and background elements were excluded.

I also organized the dataset into structured training, testing, and validation sets with corresponding image–mask pairs to support model development. Beyond annotation, I contributed to basic preprocessing in Python and OpenCV, including image normalization and data augmentation, to improve the quality and robustness of the dataset. The annotated data was later used to train a segmentation model that achieved strong performance (reported at around 97%), confirming the effectiveness and consistency of the annotations.
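
The preprocessing described above can be sketched as follows, assuming NumPy-style arrays; the function names and toy data are illustrative, not the project's actual code. The key detail when augmenting segmentation data is that any geometric transform must be applied to the image and its mask together so the pair stays aligned.

```python
import numpy as np

# Hedged sketch: normalize an image to [0, 1] and apply a
# horizontal-flip augmentation to an image-mask pair jointly.
def normalize(image: np.ndarray) -> np.ndarray:
    """Scale uint8 pixel values into float32 [0, 1]."""
    return image.astype(np.float32) / 255.0

def hflip_pair(image: np.ndarray, mask: np.ndarray):
    """Flip both arrays along the width axis so mask labels move with pixels."""
    return image[:, ::-1], mask[:, ::-1]

img = np.array([[[0], [255]]], dtype=np.uint8)   # 1x2 grayscale image
msk = np.array([[0, 255]], dtype=np.uint8)       # matching binary mask
f_img, f_msk = hflip_pair(img, msk)
```

Augmenting the pair in one call avoids the common bug of flipping images but not masks, which silently corrupts the training labels.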


2025 - 2026

Bean Shaft Detector

Image · Bounding Box
This project was part of a school assignment to develop a bean-picking device. The initial objective was to detect and classify different bean species, but the scope was later narrowed to detecting bean shafts, simplifying the problem and making it more practical to implement.

I collected images of different bean species and built the dataset from scratch, then applied augmentation techniques to increase its size and variability, expanding it to about 3,000 images. I uploaded and organized the dataset on Roboflow and, using its bounding box annotation tools, labeled multiple classes (beans and bean shafts), maintaining consistency despite variations in lighting, orientation, and background.

After completing annotation, I exported the dataset in YOLO format for model training. The annotated dataset was then used to develop a detection model for the bean-picking system, supporting accurate identification of the relevant parts of the plant.
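
For context on the export step, the YOLO label format stores one .txt file per image, with one line per box: a class index followed by the box center and size normalized to [0, 1]. The converter below is an illustrative sketch (the function name and class indices are assumptions, not part of the project).

```python
# Hypothetical sketch: convert pixel-coordinate corner boxes (x1, y1, x2, y2)
# into a YOLO-format label line for an image of size img_w x img_h.
def to_yolo_line(cls: int, x1: float, y1: float, x2: float, y2: float,
                 img_w: int, img_h: int) -> str:
    xc = (x1 + x2) / 2 / img_w   # box center x, normalized
    yc = (y1 + y2) / 2 / img_h   # box center y, normalized
    w = (x2 - x1) / img_w        # box width, normalized
    h = (y2 - y1) / img_h        # box height, normalized
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# A 100x50 pixel box at the top-left of a 640x640 image, class 0.
line = to_yolo_line(0, 0, 0, 100, 50, 640, 640)
```

Because all coordinates are normalized, the same labels remain valid when images are resized during training.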


2025 - 2025

Education


Federal University of Agriculture, Abeokuta

Bachelor of Science, Mechatronics Engineering

2023

Work History


ETL Engineering

Junior PLC Automation Expert

Lagos
2026 - Present

Oshea Projects Limited

Electrical Engineering Intern

Lagos
2025 - 2025