Trevor Unland

AI Systems Designer - Prompt Engineering

Temecula, USA
$35.00/hr · Intermediate · Data Annotation Tech

Key Skills

Software

Data Annotation Tech

Top Subject Matter

No subject matter listed

Top Data Types

Document
Text

Top Label Types

Text Generation
RLHF
Evaluation Rating
Red Teaming
Prompt Response Writing SFT

Freelancer Overview

I am an AI prompt engineering and data annotation specialist with hands-on experience evaluating and optimizing large language model outputs across 5,000+ datasets, primarily in natural language processing and reasoning tasks. My expertise includes designing complex evaluation frameworks, managing LLM feedback loops, and applying RLHF principles to ensure data quality and model alignment. I excel at identifying hallucinations, reducing ambiguity, and improving reasoning quality through structured chain-of-thought analysis and systematic pattern recognition. I have collaborated with cross-functional AI research teams to refine annotation guidelines and maintain high data integrity under NDA protocols. My technical skill set spans prompt engineering, dataset evaluation, quality assurance, workflow automation (using tools like HubSpot, Make.com, and Zapier), and basic Python and SQL for data analysis. I am passionate about bridging the gap between AI model capabilities and real-world business applications by delivering high-quality training data and robust annotation processes.

English (Intermediate)

Labeling Experience

Data Annotation Tech

LLM Training Data Evaluation Specialist

Data Annotation Tech · Text · Text Generation · RLHF
Evaluated 5,000+ prompt-response pairs for frontier large language model training, focusing on improving model reasoning, factuality, and safety alignment. Performed comparative evaluations between model variants to identify behavioral improvements and regressions. Specialized in multi-turn conversation analysis, chain-of-thought reasoning assessment, and hallucination detection across business, technical, and general knowledge domains. Conducted red teaming exercises to identify vulnerabilities in AI safety guardrails, including testing for privacy violations, unauthorized data disclosure, and policy compliance failures. Created evaluation rubrics with 30-60+ objective criteria to measure response quality, helpfulness, harmlessness, and instruction-following accuracy. Work contributed to RLHF training pipelines for production AI systems deployed to millions of users.

2024 - 2025

Education

Arizona State University

Bachelor of Arts, Business Administration

2022 - 2025

Work History

Digital Natives

Business Development Executive

Remote
2025 - 2025

Logical Position

Business Development Representative

Remote
2023 - 2024