Viena Ouma

LLM evaluation expert and data annotation specialist

Nairobi, Kenya
$5.00/hr · Expert · Appen · Clickworker · Toloka

Key Skills

Software

Appen
Clickworker
Toloka
Telus
Internal/Proprietary Tooling
Mindrift

Top Subject Matter

No subject matter listed

Top Data Types

Image
Text
Video

Top Task Types

Audio Recording
Classification
Data Collection
Evaluation Rating
Text Summarization

Freelancer Overview

AI data labeling specialist and LLM evaluation expert with hands-on experience on large-scale training data projects through platforms such as RWS, Toloka, and Mindrift. My work spans text, image, and video annotation, including prompt–response quality evaluation, safety and policy compliance review, dataset curation, and detailed classification tasks. I have contributed to projects involving supervised fine-tuning (SFT), content safety, educational data creation, and multimodal annotation. I excel at following complex guidelines, maintaining high accuracy, and delivering consistent, high-quality labels. My strengths include natural language understanding, analytical review, prompt design, and meticulous attention to detail, helping improve model performance, reduce errors, and support the development of safer, more reliable AI systems.

Swahili · French · English (Expert)

Labeling Experience

LLM Text Evaluation & Instruction Tuning Annotator

Internal/Proprietary Tooling · Text · Text Generation · Text Summarization
Worked on multiple projects focused on evaluating and improving large language models. My tasks included rating AI-generated responses for correctness, completeness, clarity, safety, and instruction-following, as well as creating high-quality prompt + response pairs for supervised fine-tuning (SFT). I labeled text for question answering, explanations, summarization, and conversation-style outputs, and flagged harmful, biased, or low-quality responses as part of red teaming and safety review. The projects involved thousands of tasks, strict adherence to detailed guidelines, and regular quality checks, where I consistently maintained high agreement with gold-standard labels and platform QA metrics.

2024 - 2025
Mindrift

Text Classification & Content Quality Rater

Mindrift · Text · Classification · Emotion Recognition
On Mindrift AI and similar platforms, I participated in text-based data labeling projects that involved classifying content by topic, sentiment, intent, and quality. I rated short texts, queries, and AI-generated snippets for relevance, usefulness, and policy compliance, and occasionally produced short summaries or improved versions of low-quality content. The work required completing a high volume of micro-tasks while maintaining strong accuracy and meeting platform quality thresholds. I strictly followed written instructions, passed qualification tests, and regularly reviewed feedback to keep my annotation performance aligned with project expectations.

2023 - 2024

Education

University of Nairobi

Master of Science in Data Science
2024 - 2025
University of Nairobi

Bachelor of Science in Computer Science
2021 - 2023

Work History

RWS TrainAI

AI Data Labeling & LLM Evaluation Contractor

Chalfont St Peter
2025 - Present
Toloka

AI Data Annotator and Rater

Amsterdam
2024