Divakar Mekala

LLM Systems Engineer & Prompt Evaluator

Naidupeta, India
$25.00/hr · Intermediate · Other · Clickworker · Mercor

Key Skills

Software

Other
Clickworker
Mercor
Micro1
Telus
Appen

Top Subject Matter

LLM prompt engineering
content generation
workflow automation

Top Data Types

Text
Image
Computer Code / Programming

Top Task Types

Classification
Prompt/Response Writing (SFT)
Evaluation & Rating
Text Generation
Text Summarization
Transcription
Data Collection
Computer Programming / Coding

Freelancer Overview

LLM Systems Engineer & Prompt Evaluator with 3+ years of professional experience in complex workflows, research, and quality-focused execution. Core strengths include internal and proprietary tooling. AI-training focus includes text data and labeling workflows such as evaluation and rating.

Languages: Telugu, English (Intermediate)

Labeling Experience

AI Model Evaluator – Outlier

Other · Text
As an LLM Systems Engineer & Prompt Evaluator at Outlier, I evaluated and refined prompts and outputs produced by large language models. This included developing and executing evaluation criteria to assess model output quality, consistency, and relevance for various content generation and communication workflows. I iterated on prompts to improve both outputs and task accuracy through systematic testing and documentation.

• Evaluated LLM outputs for quality and alignment with workflow requirements.
• Developed structured evaluation criteria and feedback loops for prompt improvement.
• Applied RLHF concepts and reviewed generative outputs for reliability.
• Documented findings and best practices to ensure reproducibility.

2024 - Present

JobShield AI – Scam Detection System

Other · Text · Classification
For the JobShield AI – Scam Detection System, I annotated job descriptions for fraudulent intent. I labeled and classified job postings as suspicious or safe, aiding the model in learning fraud patterns. My data annotation played a vital role in developing robust scam detection capabilities.

• Analyzed and structured job post text data
• Tagged examples of fraud and scam-related content
• Applied specialized annotation guidelines for risk detection
• Contributed to the quality of training sets for AI models

2024 - 2024

AIVerify – LLM Output Evaluation System

Other · Text · Classification
With the AIVerify project, I reviewed and classified AI responses against established trust metrics. I categorized outputs as safe, risky, or incorrect to support effective LLM evaluation. This annotation work improved the reliability of datasets for further AI training.

• Assessed trustworthiness of generated text
• Labeled outputs for specific categories
• Applied scenario-based classification rules
• Enhanced quality and safety of AI model datasets

2024 - 2024

Education


N.B.K.R.I.S.T College

Bachelor of Technology, Mechanical Engineering
2020 - 2024

N.B.K.R.I.S.T College

Diploma in Mechanical Engineering
2017 - 2020

Work History


Outlier

LLM Systems Engineer & Prompt Evaluator

Remote
2024 - Present

Outlier AI

AI Model Evaluator

Remote
2024 - Present