Dezmonda Abbitt

AI Training Specialist - Data Annotation

Los Angeles, USA
$20.00/hr · Expert · Labelbox

Key Skills

Software

Labelbox

Top Subject Matter

No subject matter listed

Top Data Types

Image
Video

Top Label Types

Bounding Box
Polygon
Classification
Tracking
Key Point

Freelancer Overview

I am an experienced AI Training Specialist with over three years in data labeling and annotation, supporting projects in computer vision, speech recognition, and NLP. My background includes precise annotation of video, image, and audio datasets using tools like Labelbox, CVAT, Supervisely, and Amazon SageMaker Ground Truth. I have extensive hands-on experience with bounding box, polygon, semantic segmentation, and keypoint labeling, as well as audio transcription and sentiment tagging. I consistently maintain high accuracy standards through rigorous quality assurance and collaborate closely with machine learning engineers to refine guidelines and reduce bias. My technical skills in Python and data management, along with my commitment to optimizing datasets, help drive the development of robust and reliable AI models.

Expert · English · French · Spanish

Labeling Experience

Labelbox

Multi-Modal AI Data Annotation Specialist – Computer Vision & Speech Models

Labelbox · Video · Bounding Box · Key Point
Led multi-modal data annotation projects supporting machine learning models in computer vision and speech recognition. Annotated 250,000+ images and video frames using bounding boxes, polygon segmentation, object tracking, and key point labeling for object detection and action recognition systems. Performed 5,000+ hours of audio transcription, speaker diarization, emotion recognition, and NLP entity tagging for LLM and conversational AI training. Maintained 98%+ annotation accuracy through structured QA audits, guideline compliance, inter-annotator agreement checks, and bias reduction practices. Collaborated closely with ML engineers to refine labeling taxonomies and improve dataset performance.

2024 - 2025
Labelbox

Multi-Modal AI Data Annotation Specialist – Computer Vision & Speech Models

Labelbox · Image · Bounding Box · Polygon
Currently working on a large-scale video annotation project supporting computer vision and action recognition models. Responsible for frame-by-frame labeling of dynamic video datasets, including bounding box annotation, object tracking, polygon segmentation, and key point labeling for human pose estimation. Perform temporal action recognition tagging, identifying specific activities across sequences (e.g., walking, running, object interaction) and ensuring accurate start–end frame classification. Conduct multi-object tracking to maintain consistent ID assignment across frames. Project involves annotating 120,000+ video frames with strict adherence to detailed labeling guidelines. Maintain 98%+ quality accuracy through internal QA reviews, consistency checks, and inter-annotator validation. Collaborate with machine learning engineers to refine annotation taxonomies, improve edge-case detection, and enhance dataset diversity to reduce bias. Focused on precision and consistency throughout.

2022 - 2024

Education

University of California, Irvine (Extension)

Certificate, Applied Data Science

2023 - 2023
Liberty University

Bachelor of Science, Information Technology

2022 - 2022

Work History

Outlier

AI Training Expert

Los Angeles
2022 - 2024