Audrey Rasch

Software Developer & AI Model Evaluation Analyst.

Little Rock, USA
$10.00/hr · Intermediate · Clickworker · CloudFactory · Data Annotation Tech

Key Skills

Software

Clickworker
CloudFactory
Data Annotation Tech
Humanatic
HiveMind
Labelbox
LabelImg
Label Studio
Lionbridge
Mercor
Mighty AI
Mindrift
OneForma
Remotasks
Scale AI
Telus
Toloka
Google Cloud Vertex AI
Appen
AWS SageMaker

Top Subject Matter

Artificial Intelligence & Machine Learning
Data Annotation & Model Evaluation
Software Development

Top Data Types

Computer Code Programming
Image
Text

Top Task Types

Evaluation/Rating
Computer Programming/Coding
Prompt + Response Writing (SFT)
Transcription
Data Collection
Text Summarization
Text Generation
Question Answering
Object Detection
Classification
Segmentation
Bounding Box
Polygon
Polyline
Point/Key Point
Entity (NER) Classification
Function Calling
Red Teaming
Fine-tuning
RLHF
Cuboid

Freelancer Overview

Software Developer with 6+ years of professional experience across complex workflows, research, and quality-focused execution. Education includes an Associate Degree in Computer Science from the University of Arkansas at Little Rock (2023) and a High School Diploma from Little Rock Central High School (2021).

Intermediate: English, Swahili

Labeling Experience

IMAGE EVALUATION

Image · Evaluation/Rating
In this project, I compared two images generated from the same prompt to evaluate their quality, realism, and alignment with the intended output. I assessed each image based on visual accuracy, detail, composition, and overall relevance, while also determining whether the images appeared real or AI-generated by examining artifacts, inconsistencies, and unnatural elements. I identified differences in how key features were represented, noted any distortions or synthetic patterns, and determined which image better captured a natural, realistic appearance. I then provided a clear, evidence-based rationale explaining why one image was more effective, highlighting specific strengths and weaknesses in each. This process helped identify which model produced more convincing and visually coherent results.

2026 - 2026

SOFTWARE DEVELOPER / AI PROMPT ENGINEER

Computer Code Programming · Computer Programming/Coding
In this role, I worked as a software developer and AI prompt engineer, where I designed, tested, and refined prompts to improve the quality, accuracy, and reliability of AI-generated outputs across text, audio, and image tasks. I evaluated model responses through structured comparisons, identified issues such as ambiguity, inconsistencies, and artifacts, and iteratively optimized prompts to achieve more natural, precise, and user-aligned results. I also collaborated with evaluation frameworks to assess performance, provided clear rationales for model behavior, and contributed to improving overall system efficiency and output quality.

2025 - 2026

PROMPT RESPONSE EVALUATION

Text · Prompt + Response Writing (SFT)
In this project, I evaluated AI-generated responses by comparing two outputs produced from the same prompt and determining which one performed better. I assessed each response against key criteria such as relevance, accuracy, clarity, completeness, and overall usefulness, then made a reasoned judgment on which response best met the user's intent. My role involved not just selecting the better answer, but also providing a concise, evidence-based rationale that explained the differences in quality, highlighted any issues (such as vague wording or missing details), and demonstrated why one response was more effective. This process helps improve AI performance by identifying strengths and weaknesses in how prompts are handled.

2025 - 2026

AUDIO EVALUATION

Audio · Transcription
In this project, I compared transcriptions of audio generated by different AI models, focusing on how tonal ambiguity and audio quality issues affected clarity and accuracy. I listened carefully for problems such as audible glitches, sudden changes in volume or timbre, and heavy artifacts that distorted speech or altered meaning, then assessed how well each transcription reflected the intended message despite these flaws. I evaluated which model produced the most natural, consistent, and intelligible output, and provided a concise rationale highlighting where specific issues occurred and how they impacted overall quality. This process helped identify which systems handled speech generation more reliably and produced transcriptions that better matched real human communication.

2024 - 2026

Education

AIDRALABS

CERTIFICATE OF MERIT, ADVANCED AI CONTENT MODERATION

2026 - 2026
University of Arkansas at Little Rock

Associate Degree in Computer Science
2021 - 2023

Work History

PALCO

Software Developer

Little Rock
2023 - Present
Orsanna

Junior Developer

Little Rock
2021 - 2023