Jaylen Hester


AI Model Evaluator | RLHF & Adversarial Testing Specialist | Data Analyst

Hattiesburg, USA
$20.00/hr · Intermediate · Don't Disclose · Internal/Proprietary Tooling

Key Skills

Software

Don't disclose
Internal/Proprietary Tooling

Top Subject Matter

No subject matter listed

Top Data Types

Computer Code Programming

Top Label Types

RLHF
Evaluation/Rating
Computer Programming/Coding
Question Answering
Text Generation
Prompt/Response Writing (SFT)

Freelancer Overview

I am a data analyst with hands-on experience designing automated data pipelines, conducting root-cause analysis, and turning complex datasets into actionable insights. My toolkit includes Python (pandas, NumPy), SQL, and visualization tools such as Tableau and Seaborn, which I have used to streamline reporting workflows and accelerate data processing. I have a strong background in statistical testing, data cleaning, and process optimization, and I have independently led research projects on algorithmic bias detection and validation, including building Python pipelines for data annotation and bias measurement in language model outputs. This work demonstrates my ability to synthesize, validate, and document large volumes of data with accuracy and attention to detail, making me well suited for roles in data labeling and AI training data preparation.
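A bias-measurement pipeline of the kind described above might look roughly like this minimal pandas sketch. All column names, labels, and the disparity metric here are hypothetical illustrations, not the actual project code:

```python
import pandas as pd

# Hypothetical annotated model outputs: each row is one LLM response,
# labeled by a human annotator as "biased" or "neutral".
df = pd.DataFrame({
    "prompt_group": ["A", "A", "B", "B", "B", "A"],
    "label": ["biased", "neutral", "neutral", "biased", "biased", "neutral"],
})

# Rate of "biased" labels per prompt group, and the spread between
# groups as a simple disparity score.
rates = (df["label"] == "biased").groupby(df["prompt_group"]).mean()
disparity = rates.max() - rates.min()
print(rates.to_dict(), round(disparity, 3))
```

In practice the labels would come from an annotation step over real model outputs, and the disparity score would feed into validation reports; this sketch only shows the group-wise aggregation shape such a pipeline tends to take.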

English (Intermediate)

Labeling Experience

Project Aether

Internal/Proprietary Tooling · Computer Code Programming · Question Answering · Text Generation

As an AI Model Evaluator, I provided reinforcement learning from human feedback (RLHF) by rating and annotating LLM-generated outputs. The work focused on maintaining output quality and identifying weaknesses through adversarial prompting. I contributed to continuous improvement by delivering structured feedback aligned with evolving guidelines.

• Evaluated and ranked natural language outputs for coherence and alignment.
• Designed and executed adversarial prompts to test reasoning and robustness.
• Maintained high standards under shifting project guidelines.
• Supported model development through detailed rating systems and feedback.

2025

Project Helix

Don't Disclose · Computer Code Programming · RLHF · Evaluation/Rating

I served as an AI Logic & Code Evaluator, performing reinforcement learning from human feedback (RLHF) on large language models. My responsibilities included evaluating, ranking, and annotating AI-generated responses, as well as engineering adversarial prompts to test system robustness. I created high-quality training data and rigorously fact-checked AI content for accuracy and safety.

• Evaluated and ranked model outputs in both code and natural language domains.
• Created and refined prompt-response training pairs to enhance model accuracy.
• Engineered adversarial prompts to probe model limitations and biases.
• Maintained quality benchmark adherence with thorough written feedback.

2025

Education


Merit America

Certificate in Data Analytics
2025 - 2025

University of Helsinki (MOOC.fi)

Certificate in Python Programming
2025 - 2025

Work History


Mississippi Tank Co.

Data Analyst

Hattiesburg
2025 - Present

State Farm Insurance

Claims Specialist – Property

Dallas
2022 - 2023