Tanner Cline

AI Model Evaluation & Technical Annotation (Contract)

Honolulu, USA
$50.00/hr · Intermediate · Labelbox · Data Annotation Tech

Key Skills

Software

Labelbox
Data Annotation Tech

Top Subject Matter

AI Model Evaluation
Software Engineering
Code Generation

Top Data Types

Document
Text

Top Task Types

RLHF
Computer Programming/Coding
Evaluation/Rating
Function Calling

Freelancer Overview

Brings 21+ years of professional experience across legal operations, contract review, compliance, and structured analysis. Core strengths include internal and proprietary tooling. Education: Bachelor of Science, University of Illinois, Springfield (2010). AI-training focus spans data types such as Computer Code and Programming and labeling workflows including Evaluation and Rating.

English: Intermediate

Labeling Experience

AI Model Evaluation & Technical Annotation (Contract)

Conducted expert-level evaluation and technical annotation for AI coding agents in production open-source codebases. Rated model outputs, steered agentic coding sessions, and provided PR-level feedback to shape model training data and performance. Engineered Docker-based benchmarking pipelines and authored precise issue descriptions to optimize training signals and methodology quality.

• Evaluated agent outputs across seven code-grounded quality axes, with written rationales.
• Guided multi-turn agentic coding sessions, enforcing codebase and testing conventions.
• Developed benchmarking pipelines including test harnesses and patch validation.
• Improved agent failure analysis and documented reusable evaluation methodology.

2025 - Present

Education

University of Illinois, Springfield

Bachelor of Science, Computer Science

2010

Work History

College Board

Engineer IV, Platform Operations

Honolulu
2025 - Present
Amazon

System Development Engineer II

Seattle
2023 - 2025