Digbijoy Chetry

Skilled data labeler for AI, ML, and LLM training

Tinsukia, India
$30.00/hr · Expert
Appen · Axiom AI · Data Annotation Tech

Key Skills

Software

Appen
Axiom AI
Data Annotation Tech
Labelbox
Remotasks
Scale AI
Telus

Top Subject Matter

No subject matter listed

Top Data Types

Audio
Computer Code Programming
Text

Top Task Types

Computer Programming / Coding
Data Collection
Evaluation / Rating
Prompt & Response Writing (SFT)
RLHF

Freelancer Overview

I have worked with leading AI companies including Scale AI, Telus, Mercor and OpenAI (via the Feather platform) on high-end training data projects spanning text, image, video, and multimodal tasks. At Scale AI, I contributed to complex annotation pipelines for autonomous vehicles, computer vision, and NLP applications, ensuring precise, large-scale labeling for production systems. My work also included LLM evaluation and alignment tasks through the Feather platform, where I supported OpenAI’s reinforcement learning from human feedback (RLHF) and model quality assessments. These roles sharpened my ability to manage diverse labeling workflows, maintain high accuracy, and deliver results under tight deadlines.

Alongside this applied work, I bring 5+ years of research experience at Google DeepMind and Microsoft Research and am completing my PhD at Harvard University. My background includes advancing large language models, quantum machine learning, and distributed AI systems, giving me a deep understanding of how data quality drives model performance. This combination of hands-on annotation expertise and cutting-edge research allows me to bridge the gap between data labeling and the development of state-of-the-art AI systems.

Languages: Hindi, English, Spanish (Expert)

Labeling Experience

Appen

Multilingual Data Labeling and AI Training with Appen

Appen · Computer Code Programming · Entity (NER) Classification · Classification
I have worked on several projects focused on training and evaluating AI models for code generation and developer assistance. My responsibilities included writing high-quality prompts and responses for supervised fine-tuning (SFT), generating reference code solutions, and creating structured function-calling datasets. A major part of my role involved evaluating and ranking model outputs for correctness, efficiency, readability, and security, ensuring that LLMs could produce reliable code across multiple programming languages such as Python, C++, and JavaScript. These projects supported the development of AI coding assistants and copilots by refining their ability to complete functions, debug errors, and follow best practices. I consistently delivered high-quality annotations under strict accuracy and review standards, helping improve the performance and trustworthiness of production-level coding LLMs.

2023
Data Annotation Tech

Code Annotation and Evaluation for AI Coding Models

Data Annotation Tech · Computer Code Programming · RLHF · Evaluation / Rating
I worked on projects annotating and evaluating programming code datasets to train and fine-tune coding LLMs. Tasks included annotating code snippets with metadata and function signatures, classifying solutions by functionality, and writing prompts and responses for supervised fine-tuning (SFT). I also performed evaluation and ranking of model outputs, checking correctness, efficiency, and adherence to coding standards. This work supported the development of AI-powered coding assistants and copilots, ensuring that training datasets were clean, accurate, and aligned with real-world developer needs. My background in Python, C++, and cloud-based ML systems helped me contribute both technical accuracy and domain expertise to these annotation tasks.

2023 - 2025
Scale AI

Large-Scale Data Labeling for AI Model Training

Scale AI · Text · Bounding Box · Entity (NER) Classification
I contributed to high-end annotation projects with Scale AI and OpenAI (via the Feather platform). At Scale AI, I performed large-scale image and video annotation for autonomous vehicle systems, including bounding boxes, segmentation, and object tracking, ensuring pixel-level accuracy and consistency across massive datasets. I also worked on text classification and entity recognition tasks, supporting NLP and conversational AI systems. Through the Feather platform, I contributed to OpenAI’s RLHF and evaluation workflows, assessing model outputs, ranking responses, and providing fine-grained feedback to improve alignment and safety of large language models. These projects required strict adherence to quality standards, with multi-stage reviews and high accuracy thresholds, while meeting fast turnaround times.

2022 - 2025

Education

Harvard University

MS, Computational Science and Engineering
2019 - 2021
IIT Mumbai

BTech, Computer Science Engineering
2015 - 2019

Work History

Google DeepMind

Senior Research Scientist

London
2023 - Present
Microsoft Research AI

Research Scientist

Redmond
2021 - 2023