Felix Abonchez

Senior AI Content Evaluator - Natural Language Processing

Alabama, USA
$35.00/hr · Expert

Key Skills

Software

iMerit
Redbrick AI
Scale AI
Surge AI

Top Subject Matter

No subject matter listed

Top Data Types

Image
Text

Top Label Types

Text Generation
Text Summarization
RLHF
Data Collection

Freelancer Overview

I am an experienced AI and NLP specialist with a strong background in data labeling, content evaluation, and training data quality for machine learning systems. My work has focused on developing and leading frameworks for assessing and annotating AI-generated text, ensuring accuracy, coherence, and contextual relevance at scale. I have hands-on expertise with Python, Java, and TypeScript, as well as advanced knowledge of ML frameworks such as TensorFlow, PyTorch, and Hugging Face Transformers. My experience spans leading teams in building evaluation protocols, automating annotation workflows, and publishing research on quality assessment metrics for large language models. I am passionate about improving the reliability of AI through robust data annotation and have contributed to both open-source NLP libraries and patented content evaluation methodologies.

English: Expert

Labeling Experience

Scale AI

Large-Scale Evaluation and Annotation of AI-Generated Text for Quality and Safety

Scale AI · Text · Text Generation · Text Summarization
Conducted large-scale text data labeling and evaluation projects focused on AI-generated content quality, safety, and alignment. Responsibilities included annotating and rating model outputs for coherence, factual accuracy, contextual relevance, bias, and harmful content across multiple NLP tasks such as question answering, summarization, and prompt–response generation. Designed and applied detailed annotation guidelines, performed multi-pass quality checks, and contributed to RLHF pipelines by generating high-quality prompts and reference responses. Project scale exceeded millions of text samples, with strict inter-annotator agreement thresholds and automated validation workflows to ensure consistency and reliability.

2020 - 2023

Education


University of Illinois at Chicago

Doctor of Philosophy, Computer Science

2014 - 2017

Stanford University

Master of Science, Computer Science

2012 - 2014

Work History


Google

Senior AI Content Evaluator

Chicago
2020 - Present

Microsoft

Machine Learning Engineer

Alabama
2017 - 2019