Manali Kale

AI Model Contributor (LLM Evaluator)

Remote, India
$30.00/hr · Intermediate · Other

Key Skills

Software

Other

Top Subject Matter

AI Model Evaluation
Code Quality
Documentation Enhancement

Top Data Types

Text
Document

Top Task Types

Computer Programming/Coding
Classification
Question Answering
Text Summarization
RLHF
Transcription
Function Calling
Prompt + Response Writing (SFT)

Freelancer Overview

AI Model Contributor (LLM Evaluator). Brings 5+ years of professional experience across complex professional workflows, research, and quality-focused execution. Core strengths include Other. Education includes Master of Science, University of Colorado Boulder (2024) and Bachelor of Engineering, Cummins College of Engineering (2019). AI-training focus includes data types such as Computer Code and Programming and labeling workflows including Evaluation and Rating.

English (Intermediate), Marathi, Hindi

Labeling Experience

AI Model Contributor (LLM Evaluator)

Other
I evaluated and improved code and documentation generated by large language models using structured prompts. My responsibilities included comparing outputs from multiple models and providing detailed, production-focused feedback to enhance accuracy and reliability. The focus was on real-world GitHub repositories and improving AI-generated technical outputs.

• Designed and executed structured prompt experiments for LLM evaluation
• Provided detailed ratings and feedback on generated code and documentation
• Performed side-by-side output comparisons of multiple LLMs
• Ensured workflows enhanced reliability and usability for enterprise use

2025 - 2026

Education

University of Colorado Boulder

Master of Science, Computer Science

2022 - 2024
Cummins College of Engineering

Bachelor of Engineering, Computer Engineering

2016 - 2019

Work History

Changing The Present

Computer Science Intern

Remote
2024 - 2025
Citi

Software Engineer

Pune, India
2019 - 2022