Rayson Johnson

LLM Evaluation and Text Generation Specialist in English, French & Spanish

Pennsylvania, USA
$20.00/hr · Expert · Clickworker · Data Annotation Tech · Mindrift

Key Skills

Software

Clickworker
Data Annotation Tech
Mindrift
OneForma
Remotasks
Sama
Scale AI
SuperAnnotate
Telus

Top Subject Matter

LLM Evaluation in French
Classifying text data (sentiment analysis, spam detection)
Computer Vision

Top Data Types

Computer Code Programming
Image
Text

Top Task Types

Classification
Data Collection
Prompt Response Writing SFT
Text Generation
Text Summarization

Freelancer Overview

With a background in data annotation and analysis, I have developed a broad skill set as an experienced AI trainer, one specifically suited to optimizing machine learning models. I have demonstrated proficiency in designing and executing novel training approaches aimed at improving model precision and effectiveness. My expertise includes collaborating with cross-functional teams to improve algorithms, regularly monitoring and assessing systems, and offering useful insights for enhancing AI models. I am skilled at data annotation and labeling tasks, and I use this knowledge to improve AI performance. I bring a thorough grasp of the nuances of AI training data, backed by my Master's degree in Computer Science with a focus on Machine Learning and a track record in programming and statistical modeling. As a fluent speaker of French and Spanish, I also have the cultural flexibility to work well in a variety of team environments. My passion for problem-solving and attention to detail allow me to lead data analysis and AI training projects to successful outcomes.

Expert · Dutch · French · English · Spanish

Labeling Experience

Mindrift

AGI EVALUATE LLM RESPONSE PAIRS

Mindrift · Text · Evaluation Rating
In Mindrift, responses to prompts are evaluated based on three criteria: Harmless, Honest, and Helpful, with an overall rating from 1 to 7. The process involves reading and comparing two responses, considering factual correctness, relevance, and adherence to instructions. Quality measures include avoiding sensitive topics, providing accurate information, and maintaining a suitable tone. The goal is to determine which response is better overall, considering its safety, accuracy, and usefulness.

2024
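The pairwise-evaluation workflow described above can be sketched roughly as follows. The criterion names (Harmless, Honest, Helpful) and the 1-to-7 overall scale come from the description; the data layout, the use of a simple mean for the overall score, and the tie rule are illustrative assumptions, not Mindrift's actual tooling.

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    """Per-response scores on the three criteria, each rated 1-7."""
    harmless: int
    honest: int
    helpful: int

    def overall(self) -> float:
        # Assumed aggregation: plain mean of the three criterion scores.
        return (self.harmless + self.honest + self.helpful) / 3

def compare(a: Evaluation, b: Evaluation) -> str:
    """Return which of two responses is better overall: 'A', 'B', or 'tie'."""
    if a.overall() > b.overall():
        return "A"
    if b.overall() > a.overall():
        return "B"
    return "tie"
```

In practice the rater reads both responses against the prompt before assigning the criterion scores; the comparison step only formalizes the final "which is better overall" judgment.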
Scale AI

Nightingale

Scale AI · Text · Prompt Response Writing SFT
The Nightingale project involves generating and refining text data to train large language models (LLMs). The project includes tasks such as analyzing user prompts, rating pre-generated responses, and improving these responses to ensure they are accurate, clear, and contextually appropriate. The responses must adhere to a detailed quality rubric that evaluates language mechanics, structure, tone, relevance, and completeness. The project emphasizes original, human-generated content and adheres to strict guidelines to ensure high-quality data. Quality measures include thorough proofreading for grammatical errors, maintaining concise and well-formatted responses, and ensuring the responses fully address the prompts without unnecessary verbosity. This meticulous approach ensures the data produced is suitable for effective LLM training.

2023 - 2024
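A rubric check of the kind described above might look like this. The five rubric dimensions (language mechanics, structure, tone, relevance, completeness) are taken from the description; the 1-to-5 scale, the pass threshold, and the word-count cap used to flag verbosity are illustrative assumptions.

```python
# Rubric dimensions named in the project description.
RUBRIC_DIMENSIONS = ("mechanics", "structure", "tone", "relevance", "completeness")

MAX_WORDS = 300  # assumed verbosity cap, not a documented project limit

def passes_rubric(scores: dict, response: str, threshold: int = 4) -> bool:
    """Return True if every rubric dimension scores at or above the
    threshold (on an assumed 1-5 scale) and the response is concise."""
    if set(scores) != set(RUBRIC_DIMENSIONS):
        raise ValueError("scores must cover exactly the rubric dimensions")
    if len(response.split()) > MAX_WORDS:
        return False  # fails the conciseness quality measure
    return all(scores[dim] >= threshold for dim in RUBRIC_DIMENSIONS)
```

A response that scores well on every dimension but runs past the verbosity cap would still be rejected, matching the emphasis on fully addressing the prompt "without unnecessary verbosity."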

Education

Marshall University

Master's in Computer Science
2018 - 2020
Marshall University

Bachelor's in Computer Science
2014 - 2018

Work History

Mindrift

Data Annotator

Seattle
2024 - Present
Remotasks

Freelancer (AI Trainer)

Texas
2020 - Present