
Pratibha Verma

Coding Expert (LLM RLHF Evaluator)

Delhi, India

Key Skills

Software

Top Subject Matter

Frontend Engineering
Computer Programming
Mathematics Domain Expertise

Top Data Types

Text

Top Task Types

RLHF
Red Teaming

Freelancer Overview

Coding Expert (LLM RLHF Evaluator). Brings 7+ years of experience across complex professional workflows, research, and quality-focused execution. Professional experience includes roles with Mercor, Turing Enterprises Inc., and Handshake AI. Education includes Master of Technology, Birla Institute of Technology and Science (BITS), Pilani (2025) and Doctor of Philosophy, Indian Institute of Technology Roorkee (2024). AI-training focus includes data types such as Computer Code, Programming, and Text, and labeling workflows including RLHF, Evaluation, and Rating.

Labeling Experience

Mercor

Coding Expert (LLM RLHF Evaluator)

RLHF
As a Coding Expert, I evaluated LLM-generated frontend applications using a structured RLHF framework. My work involved rating outputs on code quality, functionality, UI/UX, and correctness while providing targeted human feedback. This contributed to the fine-tuning and improved alignment of large language models for software engineering tasks.
• Implemented structured evaluation metrics to assess programming outputs
• Provided actionable feedback for iterative model improvements
• Identified strengths and weaknesses in LLM frontend engineering capabilities
• Enhanced LLM utility in practical code generation scenarios

2026 - Present

MOVE Fellowship – AI Trainer

Text
As an AI Trainer and MOVE Fellow, I participated in an interdisciplinary rubric-based project evaluating AI problem-solving approaches across STEM fields. This involved designing, assessing, and rating research problems spanning mathematics, biology, physics, and chemistry. My role contributed quantitative and qualitative metrics for LLM performance improvement.
• Designed interdisciplinary prompts for AI evaluation
• Executed rubric-based assessments to measure model reasoning
• Provided structured feedback on AI-generated research responses
• Supported development of cross-domain problem-solving benchmarks

2025 - Present
Mercor

AI Trainer – Mathematics Expert

Text
As an AI Trainer – Mathematics Expert, I curated educational prompts and evaluated LLM outputs in mathematical and data-reasoning domains. My responsibilities included creating detailed solution paths and supporting prompt-alignment tasks for generative model applications. This work directly informed the fine-tuning and assessment of AI reasoning abilities in mathematics.
• Designed and curated math prompts for LLM consumption
• Evaluated complex mathematical responses for correctness and depth
• Contributed to rubric-based model alignment processes
• Ensured LLM outputs met standards for educational and technical rigor

2025 - Present

Research Analyst – Pod Lead (Math, RLHF & Adversarial Testing)

Text, Red Teaming
As a Research Analyst and Pod Lead in Mathematics, I contributed to the creation of PhD-level exam questions and adversarial prompts for LLM stress testing. My work involved detailed review, correction, and grading of model-generated mathematical responses to enhance model robustness and clarity. I participated actively in math-focused LLM training-data refinement and rubric design for model alignment.
• Developed advanced mathematics benchmarks for LLM evaluation
• Refined and corrected LLM mathematical output for accuracy
• Built edge-case and adversarial scenarios to expose model limitations
• Improved subject-matter alignment through rubrics and iterative review

2024 - Present

AI Data Trainer – Math & Reasoning

Text
As an AI Data Trainer focused on Math & Reasoning, I developed and evaluated advanced test cases for LLMs in mathematical problem-solving. My role consisted of creating complex queries and reviewing AI-driven reasoning to guide model enhancements. These responsibilities resulted in actionable improvements to LLM mathematical and logical accuracy.
• Created and assessed mathematical test cases for LLMs
• Evaluated complex reasoning and solution correctness
• Provided comprehensive feedback for AI model improvements
• Enhanced product quality and AI-generated educational outputs

2024 - 2025

Education

Indian Institute of Technology Roorkee

Doctor of Philosophy, Mathematics

2017 - 2024
Indian Institute of Technology Roorkee

Master of Science, Industrial Mathematics and Informatics

2015 - 2015

Work History

Delhi Technological University

Guest Faculty

Delhi
2023 - 2023
IIT Roorkee

Teaching Assistant

Roorkee
2017 - 2019