Kevin Kadhaya

Agency
Kampala, Uganda
$120.00/hr · Entry Level · 4+

Key Skills

Software

Labelbox
Surge AI
V7 Labs

Top Subject Matter

No subject matter listed

Top Data Types

Image
Text
Audio

Top Task Types

Bounding Box
Segmentation
Question Answering
Text Generation
Object Detection

Company Overview

An African company invested in data annotation and collection across a broad range of societal domains.

Entry Level · English

Security

Security Overview

We prioritize the security and confidentiality of client data at every stage of our AI training and deployment processes. All sensitive data is encrypted both in transit and at rest, and we apply strict data minimization and anonymization techniques when training models. Access to systems and datasets is restricted through role-based controls and multi-factor authentication, ensuring only authorized personnel can interact with sensitive information. Our practices align with internationally recognized data protection standards and applicable regulations. We maintain continuous system monitoring and have a defined incident response process to quickly address any potential threats. This ensures your data remains confidential and protected from unauthorized access.

Labeling Experience

Data Annotation Tech

AI Training Contributor | Response Evaluation | Multimodal Annotation | Prompt Engineering

Data Annotation Tech · Text · Question Answering · Text Summarization
I contributed to multiple project-based AI training workflows focused on improving model performance across text and multimodal tasks. The scope of the projects involved evaluating, comparing, and refining AI-generated outputs to ensure accuracy, coherence, and alignment with task-specific guidelines. These projects were delivered on an intermittent basis over approximately one year, spanning different task types and evolving requirements.

My responsibilities included AI response evaluation, human-to-human (H2H) comparisons, and multimodal annotation tasks such as image-to-text and video-to-text assessments. I also performed dense structured grounding to verify factual alignment, engaged in prompt writing to improve model outputs, and completed quality assurance tasks to identify errors and inconsistencies. The project work was high-volume and iterative, involving repeated task cycles and continuous exposure to new guidelines and evaluation frameworks. This required maintaining consistency across tasks while adapting to changing instructions and quality expectations.

To ensure quality, I adhered strictly to detailed annotation guidelines and evaluation rubrics, applied consistency checks across outputs, and focused on identifying subtle errors such as hallucinations, logical inconsistencies, and misalignment with source material. I maintained a high level of attention to detail and contributed to quality assurance processes that ensured reliable and accurate data labeling across project cycles.

2025 - Present