
Joyee Chen

Technical Staff – Synthetic Data Generation and Model Fine-Tuning

Berkeley, USA
Entry Level

Key Skills

Software

No software listed

Top Subject Matter

AI alignment
LLM safety
synthetic data generation

Top Data Types

Text

Top Task Types

Fine-tuning

Freelancer Overview

Technical Staff – Synthetic Data Generation and Model Fine-Tuning. Brings 3+ years of professional experience across complex workflows, research, and quality-focused execution. Core strengths include internal and proprietary tooling. Education includes a Bachelor of Science from UC Berkeley (2024). AI-training focus includes Text data and labeling workflows including fine-tuning.


Labeling Experience

Technical Staff – Synthetic Data Generation and Model Fine-Tuning

Text · Fine-tuning
As a technical staff member at CaML, I contributed to robustly internalizing open-minded and compassionate values in LLMs using synthetic data training. My responsibilities included generating and curating diverse synthetic datasets, executing model fine-tuning, and designing novel evaluations of model outputs. I developed scalable pipelines and ensured the quality and diversity of training data in a mission-critical AI alignment context.

• Generated synthetic text data in batches of 1,000–3,000 examples to train large language models
• Managed and performed fine-tuning of open-source 8B-parameter models using Unsloth
• Developed and administered rigorous 20+ question evaluations of model behavior toward animals and digital minds
• Implemented additional evaluation and attack methodologies, adapting to new research agendas as needed

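The batch-curation step described above (deduplicating generated text and splitting it into training batches of 1,000–3,000 examples) can be sketched in plain Python. This is an illustrative sketch only; `dedupe` and `make_batches` are hypothetical helper names, not CaML's actual pipeline code.

```python
import hashlib

def dedupe(examples):
    """Drop exact-duplicate texts while preserving order."""
    seen, unique = set(), []
    for text in examples:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(text)
    return unique

def make_batches(examples, min_size=1000, max_size=3000):
    """Split a curated dataset into batches of min_size-max_size examples."""
    batches = [examples[start:start + max_size]
               for start in range(0, len(examples), max_size)]
    # Fold a trailing undersized batch into the previous one
    if len(batches) > 1 and len(batches[-1]) < min_size:
        batches[-2].extend(batches.pop())
    return batches
```

For example, 5,000 curated texts would be split into one batch of 3,000 and one of 2,000, while 3,500 texts would yield a single merged batch of 3,500 rather than an undersized trailing batch of 500.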

2024 - 2025

Education


UC Berkeley

Bachelor of Science, Electrical Engineering and Computer Science

2020 - 2024

Work History


Compassion In Machine Learning

Technical Staff – AI Alignment Researcher

Berkeley
2024 - Present