
Pranava Mittal

AI Data Annotator | AI Response Evaluation | LLM Response Evaluation | Prompt Writer

Panchkula, India
$15.00/hr · Intermediate · Scale AI · Appen

Key Skills

Software

Scale AI
Appen

Top Subject Matter

AI Response Evaluation
Data Annotation and Labeling
Prompt Writing and Evaluation

Top Data Types

Text
Image
Video

Top Task Types

Entity (NER) Classification
Bounding Box
Text Generation
Object Detection
Data Collection
Classification
Segmentation
Question Answering
Text Summarization

Freelancer Overview

AI Response Evaluation & Debugging Practice (Self-Driven). Core strengths include Scale AI, Appen, and N. Education includes Bachelor of Technology, Chitkara University (2021). AI-training focus includes data types such as Text and labeling workflows including Evaluation, Rating, and Entity (NER) Classification.

Intermediate · English · Hindi · Punjabi

Labeling Experience

Appen

Text Annotation & Conversational Data Labeling (Freelance/Project-Based)

Appen · Text · Entity (NER) Classification
During various text annotation projects, I performed intent labeling, sentiment tagging, named entity recognition (NER), and span marking on diverse text datasets. I was responsible for marking relevant entities, labeling conversational turns, and tagging text according to provided annotation guidelines. My work aimed to improve the accuracy of AI systems in recognizing intent, emotion, and structured information from language data.

• Conducted bilingual annotation tasks in both English and Hindi
• Labeled dialogue, intent, sentiment, and entity types within conversations
• Worked with annotation platforms similar to Scale AI, Appen, Outlier, and Remotasks
• Flagged unsafe or biased responses and contributed to dataset quality assurance

2022 - Present
Scale AI

AI Response Evaluation & Debugging Practice (Self-Driven)

Scale AI · Text
As an independent AI response evaluator, I reviewed and rated AI-generated outputs for accuracy, clarity, and logical consistency across a wide range of text tasks. My main focus was on evaluating model responses for helpfulness, safety, and correctness, as practiced in RLHF (Reinforcement Learning from Human Feedback) pipelines. I assessed prompts in diverse categories such as coding, general knowledge, creative writing, and Q&A to enhance AI reliability and performance.

• Rated outputs based on structured quality, helpfulness, and relevance factors
• Provided detailed written feedback aligned with annotation guidelines
• Identified reasoning errors, language issues, and factual inaccuracies
• Developed a solid workflow understanding of annotation and evaluation within RLHF contexts

2022 - Present

AI-based Mental Health Support System – Data Annotation/Evaluation

Text
For AI-based mental health support system concept development, I evaluated AI-generated conversation outputs for sensitivity, context awareness, and safety. I applied annotation criteria for sensitive or vulnerable conversations to ensure ethically appropriate responses. This work contributed to developing more empathetic and harm-mitigated AI solutions for mental health support.

• Assessed conversational context for user well-being and response appropriateness
• Flagged potential harm, bias, and insensitive content in outputs
• Explored criteria specific to handling sensitive information in AI dialogue
• Debugged response logic to improve outcome reliability and precision

2022 - 2023

Education

Chitkara University

Bachelor of Technology, Computer Science Engineering (Artificial Intelligence and Machine Learning)

2021

Work History

O

Outlier.ai

Data Annotator

Panchkula
2026 - Present