
Jema Cook

AI Content Evaluator (Freelance Contractor)

Denver, USA
Labelbox (Expert)

Key Skills

Software

Labelbox

Top Subject Matter

Large Language Models
General Knowledge
AI Systems

Top Data Types

Text
Document

Top Task Types

Prompt + Response Writing (SFT)

Freelancer Overview

AI Content Evaluator (Freelance Contractor) with 4+ years of professional experience across legal operations, contract review, compliance, and structured analysis. Core strengths include Labelbox. Education: Bachelor of Arts, University of Colorado Denver (2022). AI-training focus covers data types such as Text and labeling workflows including Evaluation, Rating, and Prompt + Response Writing (SFT).


Labeling Experience

Labelbox

AI Content Evaluator (Freelance Contractor)

Labelbox · Text
I worked as an AI Content Evaluator, assessing AI-generated responses for accuracy and clarity across a variety of topics. My responsibilities included detailed feedback, annotation for training data quality, and supporting prompt and evaluation framework development. This role required daily use of advanced AI tools and a structured approach to improving large language models.
• Evaluated AI outputs for reliability, factual correctness, and coherence.
• Annotated text data consistently to maintain high training standards.
• Collaborated in identifying logical inconsistencies and gaps in AI reasoning.
• Provided prompt engineering contributions to optimize model interactions.

2024 - Present
Labelbox

Research Assistant

Labelbox · Text
As a Research Assistant, I participated in processes related to reviewing and improving data quality for research and analysis. My tasks included content validation, development of structured evaluation criteria, and documentation of findings. I also contributed to initiatives focused on enhancing data review protocols and annotation methods.
• Evaluated and validated information for accuracy and completeness in datasets.
• Supported process improvement projects related to data annotation and quality checking.
• Developed structured criteria for subjective content assessment.
• Ensured clear communication of insights and annotation outcomes to the team.

2022 - 2023
Labelbox

Prompt and Task Refinement Initiative

Labelbox · Text · Prompt + Response Writing (SFT)
I contributed to a Prompt and Task Refinement Initiative aimed at strengthening prompt design for language model outputs. This involved reviewing prompts and task instructions, identifying weaknesses, and suggesting refinements for consistent model performance. The main focus was on improving the reliability and quality of responses generated by AI systems.
• Analyzed and edited instructions and prompts for clarity and effectiveness.
• Collaborated with team members on updated and improved workflow standards.
• Ensured prompt engineering processes were tested and iteratively improved.
• Gathered structured feedback on prompt effectiveness for model retraining.

Not specified
Labelbox

AI Model Response Evaluation Project

Labelbox · Text
I was involved in an AI Model Response Evaluation Project, which focused on evaluating outputs from large language models. Activities included annotating and assessing the accuracy, tone, and usefulness of generated responses, along with systematic documentation of issues. The work aimed to improve overall data and training quality for AI systems.
• Systematically tested and reviewed LLM responses in an annotation environment.
• Provided structured feedback to inform AI model adjustments.
• Contributed to the generation and upkeep of high-quality textual training data.
• Reported findings through detailed documentation and ongoing communication.

Not specified

Education

University of Colorado Denver

Bachelor of Arts, Communications
2018 - 2022

Work History

N/A

Research Assistant

Denver
2022 - 2023

N/A

Academic Tutor

Aurora
2021 - 2022