
Prabhatha Nissi Guntur

AI Model Output Evaluator (Software Engineer, JerseySTEM)

Newark, USA
$35.00/hr · Intermediate

Key Skills

Software

Mindrift
Mercor
Micro1

Top Subject Matter

AI/LLM Model Evaluation
Programming/Computer Science Education AI Evaluation
LLM Output Evaluation

Top Data Types

Text
Computer Code Programming
Audio

Top Task Types

Computer Programming/Coding
Evaluation/Rating
Text Summarization
Data Collection
Function Calling
Question Answering
Transcription
Object Detection
Text Generation

Freelancer Overview

AI Model Output Evaluator (Software Engineer, JerseySTEM). Brings 8+ years of professional experience across complex workflows, research, and quality-focused execution. Core strengths include internal and proprietary tooling. Education includes a Master of Science from the New Jersey Institute of Technology (2025) and a Bachelor of Technology from Lovely Professional University (2021). AI-training focus includes data types such as Text and Computer Code/Programming, and labeling workflows including Evaluation/Rating.

English (Intermediate), Telugu, Hindi

Labeling Experience

AI Model Output Evaluator (Software Engineer, JerseySTEM)

Text
I performed structured evaluation of AI-generated outputs to identify edge cases, reasoning gaps, and failures. This involved validating outputs from AI-assisted workflows and ensuring reliability and usability of results. My efforts contributed directly to improving data quality standards for large-scale pipelines.
• Designed and implemented validation checks for anomaly detection.
• Improved output reliability through comprehensive evaluations.
• Engaged in collaborative refinement of data validation processes.
• Focused on accuracy, logical consistency, and completeness.

2025 - Present

Backend AI Output Validator (Project)

Text
I built and operated validation layers to filter incorrect and inconsistent responses in AI-generated structured backend outputs. My work integrated LLM services to automate prompt validation for backend service workflows. This project improved backend response reliability by applying structured AI output validation.
• Designed validation layers for backend AI outputs.
• Integrated LLM APIs into workflow automation.
• Filtered and flagged inconsistent or invalid responses.
• Supported trusted backend pipeline generation using LLM output checks.

2025 - 2025

LLM Output Evaluator (Project)

Text
I evaluated AI-generated responses for accuracy, logical correctness, and hallucination detection using a custom Python-based evaluation system. The work involved designing response scoring and prompt refinement to improve output quality and consistency. My evaluation efforts supported LLM output reliability and AI prompt iteration.
• Performed hands-on review of textual LLM outputs.
• Applied scoring rubrics for evaluation quality.
• Identified hallucinations and logical errors in AI texts.
• Refined prompts and scoring systems for LLMs.

2025 - 2025

Code Output Evaluator (Graduate Teaching Assistant, NJIT)

I evaluated student programming assignment outputs for correctness, logical reasoning, and completeness. This included identifying incorrect assumptions and edge-case failures within computer code. I built Python tools to systematize evaluation and feedback for AI-related coursework.
• Performed structured code review and issue identification.
• Standardized grading and evaluation through scripting.
• Provided mentorship on debugging and code validation.
• Enhanced overall solution quality involving programming logic.

2024 - 2024

Education

New Jersey Institute of Technology

Master of Science, Computer Science

2023 - 2025
Lovely Professional University

Bachelor of Technology, Computer Science

2017 - 2021

Work History

JerseySTEM

Software Engineer

Newark
2025 - Present
New Jersey Institute of Technology

Graduate Teaching Assistant

Newark
2024 - 2024