Fredrick Kabaria

AI Data Annotator/Prompt Evaluator

Kenya
$25.00/hr · Intermediate
Labelbox · Prodigy · Scale AI

Key Skills

Software

Labelbox
Prodigy
Scale AI
AWS SageMaker

Top Subject Matter

AI prompt evaluation and data annotation

Top Data Types

Text

Top Task Types

Evaluation Rating
Text Summarization
Question Answering

Freelancer Overview

I have experience supporting AI training workflows through text annotation, prompt evaluation, and response quality rating. My work focuses on reviewing prompts and AI-generated outputs to improve model performance and reliability.

Key responsibilities include:

- Evaluating AI responses for accuracy, clarity, and relevance
- Rating outputs based on instruction-following and helpfulness
- Performing text classification, tagging, and sentiment labeling
- Identifying inconsistencies, hallucinations, and bias in AI responses
- Comparing multiple AI-generated answers and selecting the most accurate and useful response
- Following annotation guidelines to maintain high-quality labeled datasets

I am comfortable working with structured evaluation frameworks, reviewing datasets, and maintaining consistent labeling standards to support the training and improvement of large language models.

Languages: Swahili, English (Intermediate)

Labeling Experience

AI Prompt Evaluation and Text Annotation Project

Text · Evaluation Rating
I worked on AI training tasks involving text annotation, prompt evaluation, and response quality assessment. The project focused on improving the performance of large language models by reviewing prompts and rating generated responses based on accuracy, relevance, clarity, and safety. Key tasks included evaluating AI outputs, classifying text data, identifying hallucinations and inconsistencies, and comparing multiple responses to determine the most helpful answer. I followed structured annotation guidelines and quality scoring frameworks to ensure consistency and high-quality labeled datasets. The project involved reviewing large batches of prompts and responses, applying standardized labeling criteria, and providing feedback to improve model alignment and performance.


2024 - Present

Education

MMUST

Diploma, Data Analysis

2023 - 2026

Work History

OMNI

Market Analyst

Nyahururu, Kenya
2023 - 2024