Anthony Lucey

AI Trainer - Data Annotation & Evaluation

California, USA
$20.00/hr · Intermediate · Appen · Clickworker · Mindrift

Key Skills

Software

Appen
Clickworker
Mindrift
OneForma
Sama
Toloka
Telus

Top Subject Matter

No subject matter listed

Top Data Types

Computer Code Programming
Image
Medical DICOM
Text

Top Label Types

Evaluation Rating
Prompt Response Writing SFT
Classification
Computer Programming Coding
Data Collection
Transcription

Freelancer Overview

I am a detail-oriented AI training data specialist with hands-on experience in data annotation, content review, and AI response evaluation. My background includes annotating diverse datasets—text, image, and video—for companies like Appen and Telus International, where I ensured high-quality, bias-free data and achieved over 95% accuracy in quality assurance. I am skilled in prompt engineering, semantic labeling, and providing structured feedback to improve AI model performance. My technical toolkit includes basic Python (Pandas, NumPy), MS Office, Google Workspace, and AI platforms such as ChatGPT, Gemini, and Claude. I have also led independent projects in prompt testing and dataset labeling, gaining practical knowledge in NLP, sentiment analysis, and image classification. I am passionate about supporting the development of reliable AI systems through meticulous data work and continuous learning.
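As a small illustration of the kind of quality-assurance pass described above, here is a minimal sketch of checking annotation accuracy against a gold-standard set with Pandas. The column names, labels, and threshold are hypothetical, not taken from any specific project:

```python
import pandas as pd

# Hypothetical QA check: compare annotator labels against a small
# gold-standard set and flag disagreements for reviewer follow-up.
annotations = pd.DataFrame({
    "item_id": [1, 2, 3, 4, 5],
    "annotator_label": ["positive", "negative", "neutral", "positive", "negative"],
    "gold_label": ["positive", "negative", "positive", "positive", "negative"],
})

# Per-item correctness and overall accuracy.
annotations["correct"] = annotations["annotator_label"] == annotations["gold_label"]
accuracy = annotations["correct"].mean()

# Items that miss the gold label get flagged for re-review.
flagged = annotations.loc[~annotations["correct"], "item_id"].tolist()

print(f"accuracy: {accuracy:.0%}")   # accuracy: 80%
print(f"flagged items: {flagged}")   # flagged items: [3]
```

In practice a check like this would run over a sampled subset of each batch, with the 95%+ accuracy target as the pass threshold.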

English (Intermediate)

Labeling Experience

Appen

Freelance Data Annotator

Appen · Text · Evaluation Rating
As a freelance Data Annotator at Appen, I evaluated AI-generated data for accuracy, relevance, and overall quality. The work involved detailed annotation of text, images, and videos, adhering strictly to client guidelines and quality metrics. I collaborated regularly with managers to enhance annotation standards and ensure high-quality outcomes.
• Performed extensive annotation tasks on text, image, and video datasets.
• Identified and flagged biases or inconsistencies in AI-generated content.
• Maintained productivity and 95%+ quality assurance standards.
• Provided input to refine and improve annotation protocols.

2024
Toloka

Dataset Labeling Practice

Toloka · Image · Classification · Computer Programming Coding
The Dataset Labeling Practice project included hands-on labeling of images and text for use in AI training scenarios. Topic areas covered image classification, sentiment analysis, emotion recognition, and intent classification. Platforms such as Amazon Mechanical Turk and Labelbox were used for certification and project completion.
• Completed practical assignments in image classification using online platforms.
• Labeled text datasets for emotion and intent classification tasks.
• Gained firsthand experience with Labelbox and Amazon Mechanical Turk.
• Practiced skills in classification and annotation for AI model development.

2022
Toloka

AI Prompt Testing Initiative

Toloka · Text · Prompt Response Writing SFT
The AI Prompt Testing Initiative project focused on designing, testing, and evaluating prompts for various AI platforms. The goal was to analyze model behavior, response bias, and factual accuracy under different prompt structures. Findings were documented to develop a comprehensive guide to best prompting practices.
• Designed and tested over 50 prompts on ChatGPT and Google Bard.
• Analyzed AI responses for structure, bias, and correctness.
• Compiled documented results for research and process improvement.
• Developed reference material for best prompt engineering techniques.

2020
Telus

Content Reviewer

Telus · Text · Evaluation Rating
As a Content Reviewer at Telus International, I assessed AI-generated outputs for factual correctness, linguistic fluency, and compliance with project standards. I provided structured feedback to support the improvement of AI model responses, and participated in project guideline refresher trainings to stay current with evolving annotation requirements.
• Rated AI model responses for accuracy and appropriateness.
• Delivered structured feedback to help refine AI model performance.
• Consistently met daily and weekly review targets.
• Updated evaluation skills through frequent training sessions.

2023

Education

San Bernardino Valley College

Associate of Science, Computer Science

2018 - 2021

Work History

Appen

Freelance Data Annotator

California
2024 - Present
Telus International

Content Reviewer

Texas
2023 - Present