
Samson Ibrahim

AI Trainer and LLM Evaluator (Software Engineering Domain)

San Francisco, USA
Expert · Label Studio · Labelbox

Key Skills

Software

Label Studio
Labelbox

Top Subject Matter

Software Engineering
Backend Systems
APIs
Domain Expertise

Top Data Types

Text

Top Task Types

RLHF
Prompt + Response Writing (SFT)

Freelancer Overview

AI Trainer and LLM Evaluator (Software Engineering Domain) with 6+ years of professional experience across complex workflows, research, and quality-focused execution. Core strengths include Label Studio, Labelbox, and internal tooling. Education: Bachelor of Science, Moshood Abiola Polytechnic (2019). AI-training focus spans data types such as Computer Code, Programming, and Text, and labeling workflows including Evaluation, Rating, and RLHF.

Labeling Experience

Label Studio

AI Trainer and LLM Evaluator (Software Engineering Domain)

Label Studio
I reviewed, evaluated, and rated AI-generated code and technical documentation outputs related to backend systems. The role involved assessing the accuracy, reliability, and quality of model outputs against explicit guidelines and structured annotation processes, ensuring consistent and fair model evaluations by following standardized procedures.
• Evaluated code-generation quality in Node.js and TypeScript outputs
• Performed RLHF ranking of LLM-generated technical responses
• Conducted correctness and performance reviews for API and backend solutions
• Contributed to documentation for annotation and evaluation guidelines

2024 - Present
Labelbox

RLHF Annotator and Technical Output Evaluator

Labelbox · Text · RLHF
I participated in reward-modeling and preference-ranking tasks to optimize LLM outputs for natural-language and code-related prompts. Responsibilities included annotating data, ranking responses, and labeling technical accuracy and relevance for code explanations, with quality checks and batch-consistency audits integral to the process.
• Labeled and ranked LLM outputs for technical question answering
• Provided RLHF annotations for software engineering prompts
• Evaluated the quality, safety, and helpfulness of generated text
• Ensured annotation batch consistency across multiple review cycles

2023 - 2024

Prompt Engineer and SFT Data Creator

Text · Prompt + Response Writing (SFT)
I authored and reviewed high-quality prompts and responses for supervised fine-tuning of LLMs in software engineering and backend workflow reasoning. Duties included documenting technical scenarios clearly, writing instructions, and assessing code explanations, covering both the creation and technical editing of annotation guidelines and training datasets.
• Created instructional prompts for AI learning
• Reviewed model-generated responses for accuracy
• Composed and maintained annotation guidelines
• Edited datasets for clarity and consistency

2021 - 2023

Education

Moshood Abiola Polytechnic

Bachelor of Science, Business Administration

2014 - 2019

Work History

Kubby Inc

Backend Engineer

San Francisco
2024 - Present
Revent Technologies

Backend Engineer

N/A
2023 - 2024