Prathamesh Sonawane

Agency
Jaipur, India
$30.00/hr · Intermediate · 45+

Key Skills

Software

Other
Data Annotation Tech
CVAT
Labelbox

Top Subject Matter

No subject matter listed

Top Data Types

Text
Document
Computer Code Programming

Top Task Types

Data Collection
Prompt + Response Writing (SFT)
Computer Programming/Coding
RLHF
Fine-tuning

Company Overview

Floin AI is a global B2B artificial intelligence services company headquartered in Jaipur, India, founded in 2026. Established by a team of senior engineers with deep expertise across data science, machine learning, and software quality assurance, Floin AI was built on a single conviction: that AI companies deserve one reliable partner for their entire data and quality pipeline. With a dedicated team of 45 professionals, Floin AI delivers smart, scalable, and precision-driven AI services to clients across North America, Europe, Asia, and beyond. We consolidate data annotation, model training support, and software testing under one roof, eliminating vendor fragmentation and ensuring consistent quality at every layer of the AI development cycle.

Intermediate · English · Hindi

Security

Security Overview

We implement comprehensive security measures to safeguard client data and project integrity across all operations.

**Physical Security:** Our workstations are located in secure environments with controlled access. Only authorized personnel are allowed entry, and systems are protected with device-level authentication. Where applicable, we implement surveillance measures and restricted workspace policies to prevent unauthorized access.

**Cybersecurity Measures:** We use secure network infrastructure with firewalls, antivirus protection, and regular system updates to mitigate vulnerabilities. Data is transmitted over encrypted channels, and sensitive information is stored in secure environments with access control mechanisms.

**Access Control & Confidentiality:** Access to client data is strictly role-based, ensuring that only authorized team members can access specific datasets. All employees and contractors are required to sign non-disclosure agreements (NDAs) and follow strict confidentiality and data handling policies.

**Employee Training & Data Handling:** Our team is trained in data privacy best practices, including secure handling, processing, and storage of sensitive information. We enforce policies to prevent data leakage, unauthorized sharing, or misuse of client data.

**Audits & Compliance:** We conduct periodic internal audits to ensure adherence to security protocols and continuously improve our processes. Our practices are aligned with industry standards and client-specific compliance requirements, including GDPR-aware data handling where applicable.

We are committed to maintaining the highest standards of security, privacy, and trust in all our client engagements.

Labeling Experience

LLM Code Evaluation & RLHF Annotation for Software Engineering Tasks (Project Marlin V3)

Other · Text · RLHF · Evaluation Rating
Worked as an expert contributor on Project Marlin V3, a Reinforcement Learning from Human Feedback (RLHF) initiative focused on improving large language models for software engineering tasks. In this project, I designed complex, real-world coding prompts based on actual GitHub pull requests across multiple programming languages (Python, JavaScript/TypeScript, Go, Rust, Java, and C++). I evaluated and compared multiple AI-generated solutions (model trajectories) by analyzing code correctness, test coverage, maintainability, and engineering quality. The workflow involved executing AI models using the claude-hfi CLI tool, reviewing generated diffs, running tests, and providing structured, evidence-based feedback to determine model performance. My role required deep understanding of software engineering principles, debugging, refactoring, and system design. The evaluation outputs contributed directly to training and improving AI models using human preference data, ensuring higher-quality code generation and better real-world performance. Tools used included claude-hfi CLI, VS Code, Git, Snorkel Expert Platform, and Python-based utilities for validation and submission.

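The pairwise trajectory comparison described above can be sketched in miniature. The snippet below is a hypothetical illustration, not the actual Project Marlin V3 or claude-hfi tooling: the `Trajectory` record, rubric fields, and scoring weights are all assumptions, shown only to make the "tests plus structured rubric feedback" evaluation pattern concrete.

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    """One AI-generated solution to a coding prompt (hypothetical record)."""
    model_id: str
    tests_passed: int       # objective signal: unit tests the diff passes
    tests_total: int
    rubric: dict            # reviewer scores on a 1-5 scale, e.g. {"correctness": 4}

def score(t: Trajectory) -> float:
    # Weight objective test outcomes above subjective rubric scores
    # (the 0.7/0.3 split is illustrative, not a project-specified value).
    test_rate = t.tests_passed / t.tests_total if t.tests_total else 0.0
    rubric_avg = sum(t.rubric.values()) / (5 * len(t.rubric)) if t.rubric else 0.0
    return 0.7 * test_rate + 0.3 * rubric_avg

def prefer(a: Trajectory, b: Trajectory) -> str:
    """Return the model_id of the preferred trajectory (ties favor `a`)."""
    return a.model_id if score(a) >= score(b) else b.model_id
```

A preference label produced this way (preferred vs. rejected trajectory for the same prompt) is exactly the shape of human-feedback data that RLHF pipelines consume.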

2026 - Present