Suman Medhi

AI Evaluator & Frontend Engineer | RLHF | LLM Code Assessment

Guwahati, India
$10.00/hr · Intermediate · Labelbox · Mercor · Mindrift

Key Skills

Software

Labelbox
Mercor
Mindrift
Scale AI
Remotasks
Toloka
Data Annotation Tech
Appen
Clickworker

Top Subject Matter

Model-generated front-end web development code
Full-Stack Web Application Development
AI Model Evaluation & Prompt Engineering

Top Data Types

Text
Document
Computer Code Programming

Top Task Types

RLHF
Evaluation/Rating
Transcription
Computer Programming/Coding
Prompt + Response Writing (SFT)
Question Answering

Freelancer Overview

AI Evaluator and Frontend Engineer with hands-on RLHF experience at Outlier AI, where I assessed 500+ model-generated code samples for correctness, UI fidelity, and best-practice compliance. Skilled in prompt engineering, rubric design, and LLM code quality assessment. Frontend background in MERN Stack, Next.js, TypeScript, and Three.js.

English (Intermediate), Hindi, Assamese

Labeling Experience

AI Code Evaluator at Outlier AI

RLHF
As an AI Code Evaluator at Outlier AI, I appraised machine-generated code samples for correctness and quality. I developed scoring rubrics and prompt specifications that contributed to improved LLM benchmark performance. My work supported both model improvement and the standardization of evaluation methods.
• Evaluated 500+ HTML, CSS, and JavaScript code samples for accuracy and compliance with best practices.
• Authored detailed rubrics for binary and multi-criteria rating by distributed teams.
• Designed prompts to uncover edge cases and reduce recurrent code generation errors.
• Collaborated asynchronously in a quality-focused, international review environment.

2024

Education

Pragjyotish College

Bachelor of Computer Applications, Computer Applications
2021 - 2024

Work History

Freelance

Full-Stack Developer

Guwahati
2023 - Present