Ujjwal Mishra

LLM fine-tuning and evaluation on Computer Science QnA dataset

Bengaluru, India
$25.00/hr · Intermediate · Scale AI · Remotasks

Key Skills

Software

Scale AI
Remotasks

Top Subject Matter

Computer Science QnA
Legal Services & Contract Review
Regulatory Compliance & Risk Analysis

Top Data Types

Text
Audio
Document

Top Task Types

Fine-tuning

Freelancer Overview

LLM fine-tuning and evaluation on a Computer Science QnA dataset. Brings 2+ years of professional experience across complex workflows, research, and quality-focused execution. Core strengths include Hugging Face. Education includes a Bachelor of Engineering, Birla Institute of Technology and Science Pilani Hyderabad (2026). AI-training focus includes data types such as Text and labeling workflows including Fine-tuning.

English (Intermediate)

Labeling Experience

Advanced AI Data Trainer

Computer Code Programming · Fine-tuning

I created coding tasks in different programming languages and helped train the LLMs on them.

2024 - 2025

LLM fine-tuning and evaluation on Computer Science QnA dataset

Text · Fine-tuning

I designed and executed a fine-tuning pipeline for open-source large language models (LLMs) using a custom Computer Science QnA dataset. The process involved preparing and structuring over 1,000 QnA pairs to train and benchmark model outputs using both human and automated metrics. Parameter-efficient techniques and quantization strategies were employed to improve model performance and streamline training.

• Designed data splits and labeling strategies to facilitate RLHF and evaluation
• Labeled outputs using BLEU, ROUGE-L, and semantic similarity for thorough benchmarking
• Fine-tuned an LLM using LoRA and QLoRA approaches on curated datasets
• Evaluated prompting methods (zero-shot, few-shot, CoT) on labeled QnA data
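As a rough illustration of the ROUGE-L labeling step mentioned above (this is a minimal pure-Python sketch, not code from the project; the function names and the recall-weighting `beta` are my own choices), ROUGE-L scores a candidate answer against a reference by the length of their longest common subsequence of tokens:

```python
def lcs_length(ref_tokens, cand_tokens):
    """Longest common subsequence length via classic dynamic programming."""
    m, n = len(ref_tokens), len(cand_tokens)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if ref_tokens[i] == cand_tokens[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]


def rouge_l(reference, candidate, beta=1.2):
    """ROUGE-L F-measure on whitespace tokens; beta > 1 weights recall."""
    ref, cand = reference.split(), candidate.split()
    if not ref or not cand:
        return 0.0
    lcs = lcs_length(ref, cand)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return ((1 + beta ** 2) * precision * recall) / (recall + beta ** 2 * precision)


# Identical answers score 1.0; disjoint answers score 0.0.
print(rouge_l("a stack is a LIFO structure", "a stack is a LIFO structure"))
```

In practice, such a handwritten metric would usually be replaced by an established implementation; the sketch only shows the shape of the computation used when labeling QnA outputs.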


Education

Birla Institute of Technology and Science Pilani Hyderabad

Bachelor of Engineering, Computer Science

2021 - 2026

Work History

Sarvam AI

GenAI Intern

Bengaluru
2026 - Present

Amazon

Software Engineering Intern

Hyderabad
2025 - 2025