
Wen Gabrielle

Math/Coding/English/Chinese/Scale AI Contractor

Bellevue, USA
$30.00/hr · Intermediate · Scale AI

Key Skills

Software

Scale AI

Top Subject Matter

No subject matter listed

Top Data Types

Computer Code Programming
Document
Image

Top Task Types

Computer Programming / Coding
Fine-Tuning
Prompt Response Writing (SFT)
RLHF
Translation / Localization

Freelancer Overview

I have extensive experience in data labeling and AI training data, having worked as a contractor at Scale AI. In this role, I was involved in large-scale LLM training projects, focusing on enhancing models' reasoning abilities, particularly in mathematics. My responsibilities included code annotation, ranking, math reasoning, rubric writing, reinforcement learning with human feedback (RLHF), and prompt engineering. I played a key role in developing math-related problem-solving tasks, such as teaching models to solve linear algebra and discrete math questions, ensuring high accuracy and consistency in model outputs. Additionally, I participated in Kaggle competitions like "Mining Misconceptions in Mathematics," where I applied data processing and machine learning techniques to improve AI performance. My background in both computer science and early education, combined with hands-on experience in LLM fine-tuning, gives me a unique perspective on data quality and model behavior. This blend of technical expertise and educational insight sets me apart in AI training data projects, enabling me to design tasks that not only meet technical requirements but also support robust model learning.

Intermediate · English · Chinese (Mandarin)

Labeling Experience

Scale AI

Math - Reasoning

Scale AI · Text · RLHF · Fine-Tuning
One of the math projects I worked on focused on designing challenging prompts to test and improve the reasoning capabilities of large language models. In this project, I crafted complex math problems, particularly in areas like linear algebra and discrete mathematics, aimed at pushing the model beyond basic computations to engage in multi-step problem-solving. The goal was to evaluate how well the model could handle intricate reasoning tasks and identify gaps in its logical flow. After assessing the model's performance, I developed detailed step-by-step reasoning guides to help the model improve its problem-solving approach. This involved breaking down complex problems into manageable sub-steps, clarifying mathematical concepts, and reinforcing logical connections between each step. By iteratively refining prompts and feedback, I contributed to enhancing the model’s ability to reason systematically, improving both accuracy and the clarity of its mathematical explanations.
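To illustrate the kind of step-by-step decomposition described above, here is a minimal Python sketch (a hypothetical example constructed for this profile, not actual project material). It breaks a small linear-system problem into the sub-steps a reasoning guide would require, and ends with the consistency check a rubric would demand before accepting the final answer.

```python
# Hypothetical illustration: decomposing a small linear-algebra problem
# into verifiable sub-steps, the way a step-by-step reasoning guide
# breaks a solution down for a model.
# Problem: solve  2x + 3y = 8  and  x - y = 1  by substitution.

def solve_by_substitution():
    steps = []
    # Step 1: from the second equation, express x in terms of y:  x = y + 1.
    steps.append("x = y + 1")
    # Step 2: substitute into the first equation:
    #   2(y + 1) + 3y = 8  ->  5y = 6  ->  y = 6/5.
    y = 6 / 5
    steps.append(f"y = {y}")
    # Step 3: back-substitute to recover x.
    x = y + 1
    steps.append(f"x = {x}")
    # Step 4: verify both original equations hold -- the final consistency
    # check a rubric would require before accepting the answer.
    assert abs(2 * x + 3 * y - 8) < 1e-9
    assert abs(x - y - 1) < 1e-9
    return x, y, steps

x, y, steps = solve_by_substitution()
print(steps)
```

Each sub-step is small enough to check independently, which is what makes this style of decomposition useful when grading or guiding a model's chain of reasoning.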

2024
Scale AI

Data Science - AI Training

Scale AI · Computer Code Programming · Classification · RLHF
One of the coding evaluation projects I worked on involved assessing the performance of AI-generated code. In this project, my responsibilities included evaluating code prompts, reviewing model-generated code for accuracy, efficiency, and adherence to best practices, and providing detailed feedback to improve model outputs. I developed comprehensive guidelines to help the model generate more robust and optimized code, focusing on clarity, logic, and scalability. Additionally, I wrote unit tests to verify the correctness and functionality of the model-generated code. This required a strong understanding of various programming languages and problem-solving techniques to ensure the code met both functional and performance requirements. Through this project, I refined my skills in code evaluation, debugging, and automated testing, contributing to the continuous improvement of AI coding capabilities.
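As an illustration of the unit-testing side of this work, here is a minimal Python sketch (a hypothetical task, not actual Scale AI material). Suppose the model was asked to implement `merge_sorted(a, b)`, which merges two sorted lists; the tests below are the kind used to judge whether the generated code meets the specification, including edge cases and tie-breaking behavior.

```python
import unittest

def merge_sorted(a, b):
    """Candidate implementation under review (e.g., model-generated):
    merge two already-sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    # Append whichever tail remains; at most one of these is non-empty.
    return out + a[i:] + b[j:]

class TestMergeSorted(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(merge_sorted([1, 3], [2, 4]), [1, 2, 3, 4])

    def test_empty_input(self):
        # Edge case: one input list is empty.
        self.assertEqual(merge_sorted([], [5]), [5])

    def test_ties_are_stable(self):
        # On equal elements, items from `a` should come first (<= above).
        self.assertEqual(merge_sorted([2, 2], [2]), [2, 2, 2])
```

The suite can be run with `python -m unittest`; failures pinpoint exactly which requirement the generated code misses, which is what makes unit tests an effective grading signal for model outputs.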

2024

Education

No Education added yet

Wen G. hasn’t added any Education History to their OpenTrain profile yet.

Work History

No Work History added yet

Wen G. hasn’t added any Work History to their OpenTrain profile yet.