
Samuel Mata

Generative AI Specialist (Humanities)

San Jose, CA, Kenya
$50.00/hr · Expert · Other · Lionbridge

Key Skills

Software

Other
Lionbridge

Top Subject Matter

Generative AI
Humanities Domain Expertise
LLM Safety

Top Data Types

Text
Audio
Document

Top Task Types

Classification

Freelancer Overview

Generative AI Specialist (Humanities) with 5+ years of professional experience across legal operations, contract review, compliance, and structured analysis. Education includes a Doctor of Philosophy from the University of Southern California (2023) and a Master of Arts from Georgetown University (2022). AI-training focus covers text data and labeling workflows including evaluation, rating, and classification.

Expert · Dutch · English · Spanish

Labeling Experience

Generative AI Specialist (Humanities)

Other · Text
As a Generative AI Specialist at Innodata, I evaluated and refined prompts and responses for large language models. I conducted rubric-based and pairwise output evaluations, focusing on accuracy, tone, style, and safety. I developed and enforced annotation guidelines to ensure consistency and applied adversarial and multilingual evaluation techniques.
• Wrote, edited, and improved prompts and AI responses for LLM training and evaluation
• Performed preference ranking and quality reviews according to evolving guidelines
• Conducted adversarial testing to identify errors, hallucinations, bias, and policy risk
• Authored annotation standards and gold labels across multiple languages (EN/ES/NL)

2024 - 2025

Senior Generative AI / NLP Scientist

Other · Text
At Adobe, I led human evaluation of AI-generated marketing and business content for correctness, safety, and brand compliance. I designed frameworks for guidelines-based and preference ranking assessments. My work directly informed improvements to model safety and coherence.
• Evaluated generative outputs for accuracy, tone, and bias
• Enforced project-specific policies and content rules
• Developed scoring frameworks combining rubric and comparative methods
• Partnered with cross-functional teams to assess model performance

2022 - 2024

NLP / ML Engineer

Other · Text
As an NLP/ML Engineer at Grammarly, I reviewed and rated AI-generated rewrites and language suggestions for clarity and accuracy. My contributions included improving rater agreement and fact-sensitive evaluation of AI-generated long-form content. I enhanced annotation protocols through clearer documentation and reviewer feedback.
• Conducted quality audits and distinct grading tasks on AI model outputs
• Fact-checked AI suggestions for linguistic accuracy and appropriateness
• Refined rating and annotation guidelines for remote teams
• Focused on English grammar, style, and editorial compliance

2020 - 2022

Research Assistant (NLP)

Other · Text
While working as an NLP Research Assistant at the Allen Institute for AI, I conducted evaluation of generative model outputs focused on commonsense reasoning. I developed and applied adversarial prompt sets to identify weaknesses in AI reasoning. My work contributed to improved detection of model hallucinations and factually incorrect outputs.
• Reviewed AI-generated text for logical correctness and coherence
• Identified failure modes and designed targeted evaluation prompts
• Applied systematic annotation/rating procedures to generative tasks
• Reported findings to NLP researchers to enable model improvement

2019 - 2021
Lionbridge

Computational Linguist / Data Annotator (Contract)

Lionbridge · Text · Classification
As a Computational Linguist/Data Annotator for Lionbridge AI (TELUS International) & Appen, I completed large-scale annotation, grading, classification, and relevance evaluation tasks. I followed strict quality protocols and engaged in guideline refinement to improve annotation accuracy and inter-rater consistency. My role required applying detailed instructions to support AI language understanding models.
• Labeled datasets for classification, grading, and relevance
• Performed human-in-the-loop validation and correction of labels
• Collaborated with international teams to ensure uniform annotation
• Enhanced annotation procedures through continuous feedback and calibration

2017 - 2019

Education


Georgetown University

Master of Arts, Linguistics

2019 - 2022

Northwestern University

Bachelor of Science, Computer Science

2015 - 2018

Work History


Adobe

Senior Generative AI / NLP Scientist

San Jose, CA
2022 - 2024

Grammarly

NLP / ML Engineer

San Francisco, CA
2020 - 2022