
Alexander Aguilar

JavaScript Developer — Handshake (Project: Helix)

Las Vegas, NV, USA
$40.00/hr · Intermediate · Mercor · OneForma · CloudFactory

Key Skills

Software

Mercor
OneForma
CloudFactory
Data Annotation Tech
Labelbox
Remotasks
Snorkel AI
Internal/Proprietary Tooling
Other

Top Subject Matter

Programming/Developer-focused Annotation
Full-stack Code Evaluation
Legal Services & Contract Review

Top Data Types

Text
Image
Computer Code / Programming
Document

Top Task Types

Fine-Tuning
Prompt Response Writing (SFT)
Classification
Bounding Box
Object Detection
Text Generation
Question Answering
Text Summarization
Computer Programming / Coding
Data Collection
Evaluation / Rating
Function Calling
Transcription

Freelancer Overview

JavaScript Developer — Handshake (Project: Helix). AI-training focus includes data types such as Computer Code / Programming and labeling workflows such as Computer Programming / Coding.

English — Intermediate

Labeling Experience

OneForma

Labeler — OneForma

OneForma · Text · Classification
As a Labeler at OneForma, I executed labeling tasks on structured and unstructured datasets using classification and tagging guidelines. I maintained high accuracy while meeting platform quality and production targets. My work supported structured data labeling for AI model training across multiple projects.
• Labeled and classified various text datasets
• Applied tagging guidelines for structured annotation
• Met accuracy and throughput benchmarks
• Supported text-based AI model training workflows

2025 - Present

JavaScript Developer — Handshake

Other
As a JavaScript Developer at Handshake, I contributed to AI training pipelines by building and evaluating code-based tasks. My full-stack expertise was leveraged to produce and review code samples used in developer-focused annotation workflows. These efforts supported model learning and evaluation in coding-related contexts.
• Built and assessed code tasks for AI training
• Reviewed developer-focused annotation workflows
• Supported coding challenge dataset creation
• Ensured code quality for training efficiency

2025 - Present

Mercor

Writer — Mercor

Mercor · Text · Prompt Response Writing (SFT)
As a Writer at Mercor, I wrote high-quality prompts and structured content for AI training datasets, contributing to model fine-tuning. My work helped diversify and improve data quality using prompt engineering principles. I developed domain-relevant, structured training data for large language models.
• Crafted prompts to enhance dataset diversity
• Applied prompt engineering to improve data depth
• Authored human-written content for model training
• Supported dataset creation for AI domain adaptation

2025 - Present

Labeler / Reviewer — Alignerr

Text · Fine-Tuning
As a Labeler/Reviewer at Alignerr, I labeled and reviewed AI model outputs for supervised fine-tuning, focusing on response quality and adherence to guidelines. I evaluated agentic coding tasks for code correctness and efficiency in line with prompt requirements. I provided quality assurance feedback to ensure annotation consistency across projects.
• Adhered to annotation rubrics for dataset creation
• Reviewed and evaluated AI-generated code outputs
• Maintained labeling standards for quality assurance
• Supported code and text-based AI model fine-tuning

2025 - Present

AI Data Annotator & Evaluator — Outlier AI

Text
As an AI Data Annotator & Evaluator at Outlier AI, I performed multi-turn conversation evaluation and response scoring across several business-focused projects. My responsibilities included authoring detailed preference justifications and calibrating evaluation dimensions for quality assurance. I contributed to reinforcement learning from human feedback (RLHF) pipelines and evaluated AI-generated math prompts and responses.
• Built and assessed conversation tasks to evaluate model performance
• Authored structured evaluation justifications and feedback
• Supported RLHF data pipelines through response ranking
• Applied text formatting and rubric-based scoring methods

2025 - Present

Education

Southern New Hampshire University

Bachelor of Science, Information Technology

2016 - 2023

Work History

Jackson Hewitt

Tax Preparer

Kansas City, MO
2025 - Present
Amazon

Area Manager

Kansas City, MO
2024 - 2025