AI Data Trainer - Large Language Models (Mindrift / Toloka)
As a General AI Trainer for Mindrift (Toloka), I evaluated, generated, and refined high-quality data to improve the helpfulness, accuracy, and safety of Large Language Models (LLMs). Key responsibilities and achievements:

- Response Evaluation (RLHF): Critically assessed AI-generated text for factual accuracy, logical consistency, tone, and strict adherence to complex user prompts.
- Content Generation: Wrote high-quality prompts and ideal human responses across a wide variety of topics to support Supervised Fine-Tuning (SFT).
- Quality Assurance & Fact-Checking: Identified AI hallucinations, corrected logical flaws, and mitigated biases to ensure outputs met rigorous safety and quality benchmarks.
- Cross-Domain Expertise: Leveraged a diverse professional background spanning IT support, healthcare, and the sciences to accurately fact-check, edit, and annotate complex data across multiple subject areas.
- Workflow Efficiency: Consistently delivered nuanced, high-fidelity feedback and precise annotations within strict project deadlines.