Evancarson Jesejeri

Skilled in precise data labeling for NLP, CV, and audio ML applications

Alabama, USA
$30.00/hr · Expert · AWS SageMaker · Appen · Google Cloud Vertex AI

Key Skills

Software

AWS SageMaker
Appen
Google Cloud Vertex AI
Lionbridge
Remotasks
Sama
Scale AI

Top Subject Matter

No subject matter listed

Top Data Types

Audio
Computer Code Programming
Image

Top Task Types

Audio Recording
Computer Programming Coding
Fine Tuning
Prompt Response Writing SFT
RLHF

Freelancer Overview

I have over 5 years of experience in AI training and data labeling, working extensively with large-scale datasets to train and fine-tune state-of-the-art models, including large language models (LLMs). My work has involved reinforcement learning from human feedback (RLHF), prompt evaluation, conversational AI tuning, and quality control for AI-generated content.

I've collaborated on projects that required high-precision annotation, ranking model responses for coherence, factual accuracy, bias, and tone. My hands-on experience with tools like Label Studio, Appen, Scale AI, and proprietary platforms at Google DeepMind has allowed me to develop deep insight into the human-in-the-loop AI training pipeline.

What sets me apart is my ability to blend technical fluency with linguistic and contextual sensitivity, ensuring not just accuracy in annotations, but also ethical and user-aligned outputs. I have a strong background in NLP, data quality assessment, and multilingual prompt review. Whether working on LLM alignment, toxicity filtering, or dialog flow optimization, I bring a meticulous, feedback-driven approach that enhances model performance and trustworthiness.

English: Expert

Labeling Experience

Appen

LLM Response Ranking and Prompt Evaluation for Conversational AI

Appen · Image · Classification · Text Generation
This project focused on the human evaluation and tuning of large language model (LLM) outputs using Reinforcement Learning from Human Feedback (RLHF). My role involved ranking AI-generated responses based on accuracy, coherence, tone, and relevance to user prompts. I also performed prompt+response crafting for supervised fine-tuning (SFT) and flagged outputs that were biased, toxic, or misleading. The data was primarily in English, with some exposure to multilingual prompts. Over the course of the project, I annotated and evaluated over 10,000 data points, following strict quality control protocols and review guidelines set by the model alignment team. Tools used included Appen, Scale AI, and proprietary platforms developed by DeepMind.


2020 - 2025

Education

University of Toronto

Bachelor in Computer Science, AI Specialization
2016 - 2020

University of Toronto

Bachelor of Science, Computer Science
2016 - 2020

Work History

Google DeepMind

AI Training Specialist

London
2020 - 2025