Emmanuel Akande

Agency
Kaduna, Nigeria
$20.00/hr · Intermediate · 10+

Key Skills

Software

CVAT
Labelbox
Telus
Other

Top Subject Matter

No subject matter listed

Top Data Types

Text
Image
Video

Top Task Types

RLHF
Prompt + Response Writing (SFT)
Evaluation/Rating
Red Teaming
Function Calling

Company Overview

SabiSnap is a specialized AI training and data annotation agency powered by a curated workforce of mentored AI Engineers and tech professionals. We bridge the gap between complex model requirements and high-integrity human feedback.

Our Core Edge: EdTech & Engineering Intelligence
Born from the SabiSnap EdTech ecosystem, our team specializes in high-reasoning tasks and pedagogical data alignment. We don't just label data; we understand the logic behind it. This makes us the ideal partner for labs developing educational LLMs, reasoning agents, and intelligent tutoring systems.

Our Services:
Specialized Annotation: High-accuracy image, video, and text labeling with a focus on technical and regional nuance.
RLHF & Model Tuning: Providing high-reasoning feedback and "Chain of Thought" rationales for LLM alignment.
Domain Expertise: Specialized datasets for Education, African linguistic contexts, and AI-driven automation.

The Sabi Standard:
Every project is managed through a multi-tier "Maker-Checker" quality assurance process led by senior AI developers. Our team is proficient in industry-standard tools including CVAT, Labelbox, and Scale AI, ensuring seamless integration into your existing workflows.

English: Intermediate

Security

Security Overview

Vetted Workforce: All annotators are students within our long-term tech mentorship program, ensuring a high level of personal accountability and professional alignment.
Non-Disclosure Agreements (NDAs): Every team member is required to sign strict confidentiality agreements before accessing any client data.
Secure Access: We enforce Multi-Factor Authentication (MFA) for all platform logins and use encrypted communication channels (Slack/Discord) for project coordination.
Data Segregation: Work is performed directly within the client’s preferred labeling environment (e.g., Labelbox, CVAT), ensuring no data is ever downloaded or stored on local student devices.
Clean Desk Policy: Mentorship includes training on data privacy best practices, including a prohibition on the use of external recording devices during active labeling sessions.

Labeling Experience

Autonomous Agent Tool-Use & API Alignment

Other · Computer Code Programming · Function Calling
Managed the development of training datasets for Agentic AI systems, focusing on "Tool-Use" and "Function Calling" capabilities. The project aimed to train models to autonomously determine when to call an external API, how to format the JSON request, and how to interpret the return data to solve a user's multi-step goal.

Key Tasks:
Trace Analysis: Audited model-generated execution "traces" to ensure the AI took the most efficient logical path to a solution.
JSON Schema Validation: Annotated and corrected thousands of function call arguments to ensure 100% adherence to technical schemas.
API Response Interpretation: Trained models to handle "Edge Cases," such as API timeouts or malformed data, by providing corrective natural language feedback.
Workflow Orchestration: Structured multi-turn dialogues where the AI had to manage state across several distinct tool interactions.

Quality Measures:
Implemented a unit-testing approach to data validation, where every annotated function call was programmatically verified for syntax accuracy before being added to the final dataset.
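To illustrate the validation step described above, here is a minimal Python sketch of that kind of check, assuming a hypothetical get_weather tool and the open-source jsonschema package; the actual client schemas and tooling were project-specific.

```python
# Minimal sketch of the "unit-testing" style data check: every annotated
# function call is parsed and validated against its declared JSON schema
# before it enters the dataset. The tool name, schema, and sample
# annotations below are hypothetical.
import json
from jsonschema import validate, ValidationError  # pip install jsonschema

# Hypothetical argument schema for a "get_weather" tool call.
GET_WEATHER_SCHEMA = {
    "type": "object",
    "properties": {
        "location": {"type": "string"},
        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
    },
    "required": ["location"],
    "additionalProperties": False,
}

def check_annotation(raw_call: str, schema: dict) -> bool:
    """Return True if the annotated call is valid JSON and matches the schema."""
    try:
        args = json.loads(raw_call)              # syntax check
        validate(instance=args, schema=schema)   # schema adherence check
        return True
    except (json.JSONDecodeError, ValidationError) as err:
        print(f"Rejected annotation: {err}")
        return False

# A well-formed annotation passes; an off-schema one is flagged for correction.
assert check_annotation('{"location": "Kaduna", "unit": "celsius"}', GET_WEATHER_SCHEMA)
assert not check_annotation('{"location": "Kaduna", "temp": 30}', GET_WEATHER_SCHEMA)
```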

2025 - Present

Multimodal LLM Fine-Tuning & RLHF

Internal Proprietary Tooling · Text · RLHF
Managed a specialized workflow for high-reasoning LLM alignment, focusing on Reinforcement Learning from Human Feedback (RLHF) and Supervised Fine-Tuning (SFT). The scope involved ranking complex model outputs for factual accuracy, logical consistency, and safety guardrails.

Key Tasks:
Evaluated Chain-of-Thought (CoT) reasoning for technical and mathematical prompts.
Performed "Red Teaming" to identify and mitigate model hallucinations and bias.
Annotated multimodal data (Video-to-Text) to improve scene understanding for next-gen models.

Quality Measures:
Adhered to a strict 98%+ accuracy threshold with a multi-tier "Lead-Checker" audit system to ensure data integrity before delivery to the client.
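As an illustration of the ranking and audit workflow described above, the sketch below shows one plausible way to represent pairwise preference labels and compute accuracy against a lead checker's gold answers; the record fields and sample data are assumptions, not the project's actual format.

```python
# Hypothetical representation of RLHF preference labels plus a gold-set
# accuracy check of the kind a "Lead-Checker" audit might run per batch.
from dataclasses import dataclass

@dataclass
class PreferenceLabel:
    prompt_id: str
    preferred: str  # "A" or "B": the response judged better for accuracy, logic, and safety

def gold_set_accuracy(labels: list[PreferenceLabel], gold: dict[str, str]) -> float:
    """Share of annotator choices that match the lead checker's gold answers."""
    scored = [lab for lab in labels if lab.prompt_id in gold]
    if not scored:
        return 0.0
    correct = sum(lab.preferred == gold[lab.prompt_id] for lab in scored)
    return correct / len(scored)

batch = [
    PreferenceLabel("p1", "A"),
    PreferenceLabel("p2", "B"),
    PreferenceLabel("p3", "A"),
]
gold_answers = {"p1": "A", "p2": "B", "p3": "B"}  # set by the lead checker
print(f"gold-set accuracy: {gold_set_accuracy(batch, gold_answers):.1%}")
# Batches falling under the 98% threshold would be sent back for re-audit.
```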

2025 - Present

Generative AI Image Evaluation & Alignment

Labelbox · Image · RLHF · Question Answering
Performed high-level quality assessment and preference ranking for AI-generated imagery to improve model realism and prompt adherence. The project focused on aligning model outputs with human aesthetic and functional expectations.

Key Tasks:
Side-by-Side (SxS) Ranking: Evaluated multiple model outputs for a single prompt, selecting the "winner" based on composition, lighting, and detail.
Prompt Adherence: Verified if the generated image strictly followed complex, multi-layered text instructions.
Deformity Detection: Identified "hallucinations" in images, such as anatomical errors (extra fingers), warped textures, or gravity-defying artifacts.
Safety & Bias Auditing: Flagged content that violated safety guidelines or exhibited unwanted social biases.

Quality Measures:
Maintained a high consensus score among peer evaluators, consistently delivering labels that met the client’s "gold standard" for aesthetic and technical quality.
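The consensus idea mentioned above can be sketched as simple majority agreement across evaluators; the item IDs and votes below are hypothetical, and production projects would typically rely on the labeling platform's own agreement metrics rather than a hand-rolled script.

```python
# Minimal sketch: for each side-by-side (SxS) item, compare one evaluator's
# pick against the majority pick of their peers. Sample data is made up.
from collections import Counter

def consensus_rate(my_picks: dict[str, str], peer_picks: dict[str, list[str]]) -> float:
    """Fraction of items where my winner matches the peer-majority winner."""
    agreements = 0
    scored = 0
    for item_id, mine in my_picks.items():
        votes = peer_picks.get(item_id, [])
        if not votes:
            continue  # no peer labels for this item yet
        majority, _ = Counter(votes).most_common(1)[0]
        scored += 1
        agreements += (mine == majority)
    return agreements / scored if scored else 0.0

mine = {"img_001": "A", "img_002": "B", "img_003": "A"}
peers = {
    "img_001": ["A", "A", "B"],
    "img_002": ["B", "B", "B"],
    "img_003": ["B", "A", "B"],
}
print(f"consensus: {consensus_rate(mine, peers):.0%}")
```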

2025 - 2025