Ashish Sharma

AI Code Evaluation and RLHF Output Quality Assessment (Freelance/Entrepreneurship)

Bahadurgarh, India
$30.00/hr · Intermediate · Labelbox

Key Skills

Software

Labelbox

Top Subject Matter

AI Code Evaluation and Reinforcement Learning from Human Feedback

Top Data Types

Text

Top Task Types

RLHF

Freelancer Overview

AI Code Evaluation and RLHF Output Quality Assessment (Freelance/Entrepreneurship). Brings 4+ years of professional experience across complex workflows, research, and quality-focused execution. Core strengths include Labelbox. Education includes Bachelor of Technology, Indian Institute of Technology Jodhpur (2024) and Master of Science, Maharishi Dayanand University (2024). AI-training focus includes data types such as Computer Code and Programming and labeling workflows including RLHF.

Intermediate · English · Hindi

Labeling Experience

Labelbox

AI Code Evaluation and RLHF Output Quality Assessment (Freelance/Entrepreneurship)

Labelbox · RLHF

Worked on evaluating AI-generated code outputs for correctness, efficiency, and adherence to best practices as part of RLHF-adjacent evaluation tasks. Responsibilities included prompt engineering, hallucination detection, and output quality assessment using LLM integration patterns. Familiar with fine-tuning concepts, evaluation rubrics, and safety considerations for large language models.

• Evaluated Python, JavaScript, TypeScript, Java, C++, and SQL code outputs from AI models.
• Identified and flagged hallucinated or logically incorrect code outputs.
• Utilized platforms such as Labelbox, Alignerr, Scale AI, and DataAnnotation Tech for annotation tasks.
• Worked with Claude API, OpenAI Realtime API, and Bland.ai for model evaluation and prompt testing.

2023 - Present

Education

Maharishi Dayanand University

Master of Science, Physics

2022 - 2024

Maharishi Dayanand University

Bachelor of Science, Non-Medical Sciences

2019 - 2022

Work History

Piscis Web Studio

Founder & Full Stack Developer

Bahadurgarh
2023 - Present