Nathan Mckinley-Pace

AI Data Annotator - Mathematics, Computer Science, and Life Sciences

Herndon, USA
$40.00/hr · Entry Level · Other

Key Skills

Software

Other

Top Subject Matter

Mathematics
Computer Science
Computational Biology

Top Data Types

Image
Text
Computer Code Programming

Top Task Types

Evaluation/Rating
RLHF
Red Teaming

Freelancer Overview

I bring extensive expertise in mathematics, computer science, and biology across various workflows, along with experience evaluating AI responses and creating adversarial prompts. My education includes an in-progress Master of Science in Computational Biology at George Mason University (2023-present) and a Bachelor of Science in Mathematics and Computer Science from GMU (2022).

Entry Level · English

Labeling Experience

LLM Preference Rater

Text · Evaluation/Rating
Conducted pairwise ranking of LLM responses to STEM-related prompts, choosing the stronger response and explaining the preference decision in writing. Applied rubric-based standards for factual correctness, instruction adherence, clarity, and tone/style. Produced consistent evaluation judgments in a distributed, quality-sensitive annotation environment.

2026 - Present

RLHF Data Contributor

Text · RLHF
Evaluated pairs of LLM responses to STEM-related user prompts in mathematics, computer science, and biology. Rated outputs using detailed rubrics covering instruction following, factual correctness, clarity, and style/tone; selected the stronger response; and provided written justifications for pairwise preferences. Edited responses to better satisfy rubric and quality requirements.

2026 - Present

LLM Preference Rater

Text · Evaluation/Rating
Performed pairwise evaluation of LLM responses to STEM-related prompts, primarily in mathematics, computer science, and biology. Compared two outputs, selected the stronger response based on instruction following, correctness, clarity, and tone, and wrote concise justifications explaining the preference decision.

2026 - Present

LLM Preference Rater

Text · Evaluation/Rating
Evaluated pairs of LLM-generated responses to STEM prompts using structured rubrics for instruction following, correctness, clarity, and tone/style. Selected the better response, justified the ranking in writing, and in some cases edited responses to improve quality and policy/rubric alignment. This work required careful factual checking, consistency under changing guidelines, and strong attention to written reasoning quality.

2026 - Present

Multimodal Adversarial Prompt Contributor

Text · Red Teaming
Designed adversarial STEM-focused image-plus-prompt tasks intended to expose failure modes in multimodal LLMs. Created challenging examples in mathematics and biology to test reasoning, visual interpretation, factual reliability, and robustness to deceptive or ambiguous inputs.

2025 - 2026

Education

George Mason University

Bachelor of Science, Computer Science and Mathematics

2014 - 2022
Mount St. Mary's University

Non-degree Student, Mathematics and Computer Science

2008 - 2011

Work History

National Institutes of Health

Research Intern

Bethesda
2024 - 2024
Self-Employed

Math and Computer Science Tutor

Fairfax
2015 - 2024