Alain Morris

LLM Evaluation & AI Reasoning Specialist (Code, Logic, QA)

Seoul, South Korea
$35.00/hr · Intermediate
AWS SageMaker · Google Cloud Vertex AI · Labelbox

Key Skills

Software

AWS SageMaker
Google Cloud Vertex AI
Labelbox
Internal/Proprietary Tooling

Top Subject Matter

No subject matter listed

Top Data Types

Computer Code Programming
Text

Top Task Types

Computer Programming Coding
Function Calling
RLHF

Freelancer Overview

I have professional experience on AI training and evaluation projects for leading organizations, including Google (Gemini), Meta, xAI, and Apple, contributing to both code-focused and interface-level AI tasks. My work has involved evaluating and generating high-quality training data across Python and Java programming, frontend development (HTML, CSS, JavaScript, React), and UI-focused evaluation, ensuring correctness, usability, and adherence to task-specific guidelines.

A significant portion of my work has focused on LLM function-calling evaluation: assessing structured outputs, validating API and schema adherence, testing edge cases, and reviewing model reasoning across real-world scenarios. I bring a software engineering mindset to AI data labeling, emphasizing precision, consistency, and quality control, which makes me effective at identifying subtle errors, improving model reliability, and producing high-signal training data for advanced AI systems.

Intermediate · English · Spanish

Labeling Experience

Expert Software Engineer

Internal/Proprietary Tooling · Computer Code Programming · RLHF · Computer Programming Coding
Worked on a Google LLM function-calling evaluation project focused on verifying and curating high-quality function-calling training data for production-grade language models. The task involved reviewing structured data samples, each consisting of a user query, a multi-step solution (a sequence of function calls with outputs), and a final natural-language response, and ensuring the entire pipeline correctly and completely answered the user's intent. Responsibilities included analyzing user intent, validating overall solution completeness, and performing granular verification of individual function calls: checking parameter correctness, groundedness (traceability to the user query or prior function outputs), and relevance to the task, and identifying unnecessary or missing calls. I also assessed final response quality, ensuring it was fully supported by the function results, free of hallucinations, and did not introduce unsupported or extraneous information.
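The per-call checks described above (parameter correctness, groundedness) can be sketched as a small validator. This is a minimal, hypothetical illustration, assuming a simple dict-based schema; the function names and sample data are invented for the example, not project material.

```python
# Hypothetical sketch of per-call verification: parameter correctness
# (required params present, no unexpected params) and groundedness
# (argument values traceable to the user query or prior outputs).

def verify_call(call, schema, known_values):
    """Return a list of issues found in one function call."""
    issues = []
    spec = schema.get(call["name"])
    if spec is None:
        return [f"unknown function: {call['name']}"]
    # Parameter correctness: every required param present, no extras.
    for p in spec["required"]:
        if p not in call["args"]:
            issues.append(f"missing required param: {p}")
    for p in call["args"]:
        if p not in spec["params"]:
            issues.append(f"unexpected param: {p}")
    # Groundedness: each value must appear in the user query or in a
    # prior function call's output (collected in known_values).
    for p, v in call["args"].items():
        if v not in known_values:
            issues.append(f"ungrounded value for {p!r}: {v!r}")
    return issues

schema = {"get_weather": {"params": {"city"}, "required": ["city"]}}
known_values = {"Seoul"}  # e.g. mentioned in the user query
call = {"name": "get_weather", "args": {"city": "Seoul"}}
print(verify_call(call, schema, known_values))  # → []
```

A real pipeline would also track each call's outputs into `known_values` so later calls can be grounded in earlier results.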

2025

Expert Frontend Engineer

Internal/Proprietary Tooling · Computer Code Programming · Computer Programming Coding
Worked on UI generation and evaluation projects for xAI and Meta, focused on improving large language models’ ability to generate, reason about, and evaluate user interface code and layouts. The work involved assessing AI-generated frontend outputs using HTML, CSS, and JavaScript, as well as UI-focused responses, ensuring they were functionally correct, visually coherent, and aligned with user intent. Responsibilities included reviewing model-generated UI code for structural correctness, semantic HTML usage, CSS layout validity, responsiveness considerations, and JavaScript behavior, as well as identifying rendering issues, broken interactions, and inconsistencies between the user request and the generated interface. I evaluated outputs for usability, clarity, and adherence to instructions, flagging hallucinated UI elements, missing components, or misinterpreted requirements.
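One of the checks described above, flagging missing components in generated markup, can be automated alongside manual review. The sketch below is a hypothetical illustration using Python's standard-library HTML parser; the sample markup and required-component list are invented for the example.

```python
# Hypothetical sketch: does the generated HTML contain the components
# the user asked for? Missing tags are flagged for manual review.
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Collect every tag name that opens in the document."""
    def __init__(self):
        super().__init__()
        self.tags = set()

    def handle_starttag(self, tag, attrs):
        self.tags.add(tag)

def missing_components(html, required_tags):
    """Return requested tags that are absent from the generated markup."""
    collector = TagCollector()
    collector.feed(html)
    return sorted(set(required_tags) - collector.tags)

generated = "<form><input type='text'><button>Send</button></form>"
# Suppose the user asked for a form with an input, a button, and a label.
print(missing_components(generated, ["form", "input", "button", "label"]))
# → ['label']
```

Structural checks like this catch omissions quickly, but visual coherence, responsiveness, and interaction behavior still require rendering the output and reviewing it by hand.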

2024 - 2025

Education

University of Belize

Bachelor of Science, Computer Science

2018 - 2022
Korea National University of Transportation

Master of Engineering, Mobility and Computer Science

2024

Work History

Korea National University of Transportation

Graduate Researcher

South Korea
2024 - Present
Adaviv

Software Engineer & Solution Architect

N/A
2022 - 2024