Sonee Hamilton

AI Trainer & Data Annotation Specialist - Large Language Models

Tempe, USA
$30.00/hr · Expert · Appen · Remotasks · Telus

Key Skills

Software

Appen
Remotasks
Telus

Top Subject Matter

No subject matter listed

Top Data Types

Text

Top Label Types

Evaluation Rating

Freelancer Overview

I am an experienced AI Trainer and Data Annotation Specialist with a strong background in improving the quality and safety of large language models. My expertise includes writing and refining prompts, evaluating and ranking AI-generated content, and conducting rigorous fact-checking and research across diverse domains. I am adept at identifying bias, logical errors, and hallucinations through adversarial testing, and I consistently apply complex project guidelines to ensure high standards of accuracy and compliance. With a keen analytical mindset and a commitment to quality assurance, I thrive in fast-paced, remote environments and am skilled at providing structured feedback to enhance model performance. My experience spans roles in AI training, data annotation, content evaluation, and linguistic assessment, giving me a comprehensive understanding of the end-to-end data labeling process.

English (Expert)

Labeling Experience

Appen

AI Trainer & Data Annotator – Outlier AI (Remote)

Appen · Text · Evaluation Rating
As an AI Trainer & Data Annotator at Outlier AI, I labeled and evaluated AI-generated text responses for accuracy, reasoning, and safety improvements. I developed and refined prompts, created ideal responses for fine-tuning datasets, and ranked model outputs according to established evaluation metrics. My daily work emphasized rubric-based annotation, bias detection, and comprehensive quality assurance within a fast-paced remote environment.

• Labeled AI-generated text responses for SFT and RLHF purposes.
• Ranked responses based on coherence, correctness, and policy compliance.
• Identified hallucinations, bias, and logical inconsistencies in model outputs.
• Provided structured annotation feedback to align with detailed rubrics.

2024
Appen

Linguistic Evaluator – RWS (Remote)

Appen · Text · Evaluation Rating
As a Linguistic Evaluator at RWS, I reviewed machine-generated text content for linguistic quality and contextual accuracy. My responsibilities included labeling inconsistencies, ambiguity, bias, and factual misalignment according to strict annotation standards. This work contributed directly to the linguistic performance and reliability of AI systems.

• Evaluated the linguistic and contextual quality of AI-generated text.
• Labeled bias, inconsistencies, and ambiguous language in outputs.
• Ensured compliance with detailed annotation and quality guidelines.
• Delivered data supporting robust AI language model development.

2024 - 2025
Telus

Search Quality Rater & AI Evaluator – TELUS International AI (Remote)

Telus · Text · Evaluation Rating
As a Search Quality Rater & AI Evaluator with TELUS International AI, I assessed the relevance, accuracy, and utility of search results and AI-generated outputs. I conducted web research to validate information and adhered strictly to structured rating guidelines. My work provided actionable feedback used for search engine and AI system improvements.

• Rated AI outputs and search results for user intent and usefulness.
• Performed web research to ensure accuracy and content validity.
• Maintained consistency in rubric-based ratings across evaluations.
• Supported enhancements to data-driven search and AI processes.

2024 - 2025
Remotasks

AI Data Analyst & Content Labeler – Remotasks (Remote)

Remotasks · Text · Evaluation Rating
As an AI Data Analyst & Content Labeler at Remotasks, I contributed to the development and improvement of machine learning models by annotating AI-generated text. My tasks included applying evolving labeling guidelines, identifying policy violations, and ensuring factual and reasoning quality. The role required consistent adherence to productivity and quality benchmarks across multiple projects.

• Annotated AI outputs for factual accuracy and guideline alignment.
• Flagged harmful content, policy violations, and safety risks.
• Adapted to varying project instructions and annotation standards.
• Delivered labeled datasets used for model training and evaluation.

2022 - 2024

Education

N/A

Bachelor of Science, Construction Management and Technology
2023 - 2025

Work History

Outlier

Data Analyst

Tempe
2024 - Present