Danesh Kumar

Prompt Engineer - AI and Backend Development

Karachi, Pakistan
$14.00/hr | Expert
Appen, iMerit, Scale AI

Key Skills

Software

Appen
iMerit
Scale AI
SuperAnnotate
Other

Top Subject Matter

No subject matter listed

Top Data Types

Audio
Computer Code Programming
Image
Text
Video

Top Label Types

Evaluation Rating
Fine Tuning
Function Calling
Object Detection
Prompt Response Writing SFT
RLHF
Text Summarization

Freelancer Overview

I am a software engineer with strong experience in LLM training, evaluation, and AI reasoning, specializing in the creation and annotation of high-quality datasets for machine learning models. My expertise includes designing SFT datasets, conducting RLHF evaluations, ranking model outputs, and writing clear, reproducible Python and Java code for both training and benchmarking AI systems. I am adept at producing structured annotations, generating metacognitive rationales, and performing deep error analysis to improve model performance on reasoning-heavy tasks. My background also includes building scalable backend systems and delivering data-driven solutions in fintech, giving me a solid foundation in data management and quality assurance for AI training data pipelines.

Languages: English, Urdu (Expert)

Labeling Experience

Scale AI

Prompt Engineer / Python AI Evaluator

Scale AI | Text | Evaluation Rating
As a Prompt Engineer / Python AI Evaluator at Scale AI, I conducted evaluations to benchmark the performance of large language models (LLMs) on reasoning, coding, and language tasks. My responsibilities included ranking AI model responses, writing high-quality rationales, and creating structured annotations for model outputs. I also led SFT dataset creation, ensuring quality assurance and consistency.
• Designed and implemented prompt engineering workflows for data evaluation.
• Conducted RLHF and output ranking for LLMs.
• Curated datasets for supervised fine-tuning (SFT) and evaluation.
• Generated metacognitive rationales to improve model reasoning and performance.


2023 - 2024

Education

Institute of Business Administration, SIBAU

Bachelor of Science, Computer Science
2019 - 2022

Work History

Matrix Systems

Software Engineer II

Karachi
2023 - Present