Edward Cullen

Data & Administrative Specialist - AI & Content Evaluation

New York, USA
$20.00/hr · Expert · Labelbox

Key Skills

Software

Labelbox

Top Subject Matter

No subject matter listed

Top Data Types

Text

Top Label Types

Entity (NER) Classification
Segmentation
Text Generation
RLHF
Evaluation Rating

Freelancer Overview

I am a detail-oriented data professional with hands-on experience in data annotation, content review, and supporting AI/ML model evaluation. My work involves accurately labeling both structured and unstructured data, following complex guidelines, and ensuring high-quality training datasets for AI projects. I am skilled in using CRM systems like Salesforce, as well as Microsoft Office tools for data management and reporting. My background includes collaborating with cross-functional teams to optimize workflows, conducting quality assurance checks, and preparing data summaries for operational decisions. With certifications in data annotation and AI analytics, a strong attention to detail, and a passion for narrative-driven content, I am committed to delivering reliable and consistent results that improve AI model performance and user experiences.

Languages (Expert): English, Spanish, Portuguese, Tagalog

Labeling Experience

Labelbox

LLM Data Annotation & Text Classification Specialist

Labelbox · Text · Entity (NER) Classification · Segmentation
I worked on a large-scale AI training project focused on improving Large Language Model (LLM) performance through high-quality data annotation and evaluation. My responsibilities included labeling and classifying text data, performing Named Entity Recognition (NER), rating AI-generated responses for accuracy, relevance, and safety, and writing structured prompt-response pairs for supervised fine-tuning (SFT). I also conducted RLHF-based evaluations to improve model alignment and reduce bias. The project involved annotating more than 15,000 text samples across multiple domains, including customer service, healthcare, and general knowledge. I strictly followed annotation guidelines to ensure consistency, maintained high inter-annotator agreement scores, and adhered to data privacy and confidentiality standards. Quality assurance processes included peer reviews, multi-stage validation, and feedback loops to continuously improve annotation accuracy and model output quality.

2022 - 2024

Education

HP Foundation

Certificate in Critical Thinking in the AI Era
2024 - 2025
Daystar University

Certificate, Cybersecurity and Artificial Intelligence Analytics

2021 - 2021

Work History

Scale AI

AI Data Annotator

New York
2021 - 2022