
Poley Betsy

AI Data Specialist (Meta)

Kansas, USA
$18.00/hr · Expert

Key Skills

Software

Appen
CloudFactory
Data Annotation Tech
Google Cloud Vertex AI

Top Subject Matter

Large Language Models
General Knowledge
Conversational AI

Top Data Types

Video
Audio
Image

Top Task Types

Bounding Box
Cuboid
Point / Key Point
Entity (NER) Classification
Classification
Segmentation
Transcription
Evaluation / Rating
Computer Programming / Coding
Prompt-Response Writing (SFT)

Freelancer Overview

AI Data Specialist (Meta). Core strengths include Internal and Proprietary Tooling. Education includes Doctor of Philosophy, University of California, Berkeley (2024) and Master of Science, University of Washington (2020). AI-training focus includes data types such as Text and labeling workflows including Evaluation and Rating.

Expert · German · English

Labeling Experience

AI Data Specialist (Meta)

Text
Evaluated large language model (LLM) outputs for conversational quality, factual accuracy, and logical soundness. Used structured taxonomies and standardized rubrics to annotate reasoning, tone, and completeness of AI-generated responses. Delivered pairwise comparisons and fine-grained feedback to support reinforcement learning and model optimization.
• Fact-checked model responses using trusted sources.
• Annotated strengths, weaknesses, and inconsistencies in AI outputs.
• Maintained high inter-annotator agreement and reproducible scoring.
• Produced evaluation artifacts to improve deployment readiness.

2022 - 2024

Data Analyst / AI Data Contributor (Amazon)

Text
Reviewed AI-generated outputs for logical coherence and statistical accuracy across diverse domains. Conducted structured annotation and validation workflows to support model evaluation and performance assurance. Delivered detailed qualitative judgments and comparison tasks in accordance with established best practices.
• Applied statistical reasoning to identify errors and inconsistencies.
• Supported quality assurance in AI evaluation programs.
• Validated content using annotation taxonomies.
• Delivered comparative evaluations of multiple outputs.

2019 - 2021

AI & Data Research Assistant (Private Research)

Text
Developed and applied annotation guidelines for evaluating experimental AI system outputs. Conducted analytical validation and structured assessment using mathematical and logical frameworks. Participated in early-stage human-in-the-loop workflows to improve model scoring and reliability.
• Designed structured annotation guidelines for model assessment.
• Applied mathematical reasoning to output validation.
• Contributed to research-driven evaluation protocols.
• Improved feedback clarity in AI experiment reviews.

2017 - 2019

Education

University of California, Berkeley

Doctor of Philosophy, Mathematics

2020 - 2024

University of Washington

Master of Science, Applied Mathematics

2018 - 2020

Work History

Meta

AI Data Specialist

Winfield
2022 - 2024