Dana Wortman

AI Specialist – Mercor Intelligence (Data Labeling/AI Training)

Remote, USA
Expert
Mercor

Key Skills

Software

Mercor

Top Subject Matter

LLM training
Fine-tuning
Domain expertise
Model evaluation

Top Data Types

Video
Text
Document

Top Task Types

RLHF

Freelancer Overview

AI Specialist – Mercor Intelligence (Data Labeling/AI Training). Brings 28+ years of professional experience spanning complex technical workflows, research, and quality-focused execution. Core strengths include Mercor internal and proprietary tooling. Education includes a Doctor of Philosophy from the University of Maryland, Baltimore County (2014) and a Master of Computer Science from the University of Virginia (2002). AI-training focus covers data types such as computer code, programming, and video, and labeling workflows including RLHF, evaluation, and rating.


Labeling Experience

Mercor

AI Specialist – Mercor Intelligence (Data Labeling/AI Training)

RLHF
As an AI Specialist at Mercor Intelligence, I generated adversarial training data to fine-tune frontier LLMs. I authored coding problems, reviewed generated solutions, and critiqued chain-of-thought reasoning in model outputs. I ensured high annotation throughput and accuracy in both RLHF and supervised fine-tuning workflows.
• Generated adversarial prompts and coding problems for AI training
• Performed code review and reasoning-step verification on AI-generated solutions
• Maintained high-accuracy, high-throughput annotation pipelines
• Focused on RLHF and supervised fine-tuning evaluation for LLMs

2025 - Present

Graduate Research Supervisor — UCCS LLM Evaluation Research (AI Training)

Supervised graduate research on training LLM-based agents to play video games and produce detailed critiques of gameplay. Developed agent-as-evaluator methodologies to assess usability, mechanics, and design features. Contributed to evaluation rubrics for assessing agent reasoning and effectiveness in novel environments.
• Trained AI agents using LLMs in gaming environments
• Designed and reviewed AI-generated usability and game design critiques
• Created evaluation rubrics for LLM agent assessments
• Advanced early methodologies in agent-as-evaluator research

2021 - 2025

Novelty Detection Consultant — DARPA SAIL-ON Grant/UCCS (RL/AI Evaluation)

Video
Directed a team focusing on RL environment and testbed design for AI agent evaluation and novelty detection. Co-developed custom agents in VizDoom to detect and identify environmental novelties at various abstraction levels. Contributed to systematic assessment and benchmark creation for novel AI agent behaviors.
• Led real-time agent evaluation projects in synthetic, game-based environments
• Engineered and tested agents’ capability to handle unexpected changes
• Established benchmarks for multi-level novelty detection in RL
• Supported DARPA-funded open-world AI evaluation methodologies

2019 - 2024

Education


University of Maryland, Baltimore County

Doctor of Philosophy, Computer Science

2010 - 2014

University of Virginia

Master of Computer Science

2001 - 2002

Work History


Mercor Intelligence

AI Specialist (Contractor)

Remote
2025 - Present

Rabid Troll Studios

Senior Engineer and Game Director

Colorado Springs, CO
2016 - Present