Jaime Deonarain

Data Labeling Specialist (Computer Vision Focus)

New York, USA
$30.00/hr · Expert · Scale AI · Appen

Key Skills

Software

Scale AI
Appen

Top Subject Matter

Autonomous Vehicles
Computer Vision
Search Relevance

Top Data Types

Image
Text
3D Sensor
Audio
Document

Top Task Types

Bounding Box
Classification
Segmentation
Transcription

Freelancer Overview

Data Labeling Specialist (Computer Vision Focus). Brings 2+ years of professional experience across legal operations, contract review, compliance, and structured analysis. Core strengths include Scale AI and Appen. Education includes a Bachelor of Science from the University at Albany, SUNY (2021). AI-training focus includes data types such as Image, Text, and 3D Sensor, and labeling workflows including Bounding Box, Classification, and Segmentation.

English (Expert)

Labeling Experience

Scale AI

Data Labeling Specialist (Computer Vision Focus)

Scale AI · Image · Bounding Box
As a Data Labeling Specialist at Scale AI, I labeled images and 3D point cloud data for autonomous driving models. I maintained a 98.5% QA pass rate and developed reference guides for boundary consistency. This role required spotting edge cases and upholding high throughput and accuracy standards.
• Daily annotation of 200–300 images including 3D sensor data and occluded scenes
• Flagged over 400 unusual edge cases for guideline updates
• Created visual annotation references for new team members
• Ranked in top 10% of labelers over six months for speed and quality

2022 - Present
Appen

Search Relevance Annotator (Text Labeling)

Appen · Text · Classification
As a Search Relevance Annotator on the Yukon Project with Appen, I labeled and classified search queries for intent, satisfaction, and spam. My agreement rate with gold test sets was consistently high. I contributed to updating annotation guidelines and participated in calibration meetings.
• Labeled over 12,000 search queries using a 7-point relevance scale
• Maintained a 96% agreement rate with gold standards
• Participated in weekly calls to resolve ambiguous cases
• Helped refine annotation documentation with practical examples

2021 - 2022
Metro Insights LLC

Junior Data Labeling Assistant (Contract)

Text · Classification
As a Junior Data Labeling Assistant at Metro Insights LLC, I labeled chat log data for sentiment and issue category. This involved reviewing and correcting previous annotations for quality. I also developed tracking tools to support team throughput management.
• Labeled customer chat logs for sentiment (positive/negative/neutral)
• Categorized issues as billing, technical, or general
• Reviewed 1,500 records to catch and correct mislabels
• Designed an Excel tracker for daily labeling performance

2020 - 2021

Audio Transcription Cleanup

Audio · Transcription
For Audio Transcription Cleanup, I reviewed and corrected 800 short audio clips from call center conversations. The task included removing filler words, marking speakers, and fixing overlapping speech for clarity. Transcript accuracy improved from 89% to 94% after my rework.
• Audited and improved transcripts for 800 audio clips
• Identified and split overlapping speech into clear speaker turns
• Removed verbal fillers to increase accuracy
• Applied quality checks to elevate overall transcript standards

Not specified

LLM Safety Labeling

Text
In LLM Safety Labeling, I ranked 500 model responses according to helpfulness and harmlessness using a rubric with six edge-case categories. My labeled rankings contributed directly to tuning a chatbot's reward model. The work focused on nuanced safety, medical, and political question review.
• Evaluated model outputs for six distinct safety and edge-case types
• Used structured ranking rubrics for fairness and repeatability
• Rankings informed fine-tuning of chatbot model behavior
• Labeled responses used in reinforcement learning from human feedback

Not specified

Education

University at Albany, SUNY

Bachelor of Science, Information Science

2017 - 2021

Work History

Metro Insights

Junior Data Analyst

New York
2020 - 2021