Nabsa Solomon

Nabsasolo

Louisiana, USA
$25.00/hr · Expert
Appen, Clickworker, Data Annotation Tech

Key Skills

Software

Appen
Clickworker
Data Annotation Tech
Lionbridge
Mindrift
OneForma
Toloka
Telus
Other

Top Subject Matter

LLM Evaluation in English
Image Classification
Linguistic Annotation and Labeling

Top Data Types

Document
Image
Text

Top Task Types

Emotion Recognition
Evaluation Rating
Object Detection
Prompt Response Writing SFT
Text Generation

Freelancer Overview

I am an expert in data annotation, having contributed to numerous LLM projects for a diverse range of clients. My strong linguistic background and practical experience with various tools set me apart in this field. Additionally, I have worked as an AI trainer for different organizations, leveraging my expertise to enhance AI models. My roles as a prompt generator and AI rewriter further showcase my versatility and depth in the AI domain.

English: Expert

Labeling Experience

Scilla Prompt Evaluation

Other · Text · Evaluation Rating · Prompt Response Writing SFT
The Scilla project focused on the annotation and labeling of data to support large language models (LLMs). The project's scope included ensuring high-quality, nuanced annotations that align with the requirements of machine learning and AI systems. It was intended to enhance the AI's understanding and generation capabilities through accurate and contextually relevant data labeling.

Quality measures adhered to:
Guideline adherence: annotators followed precise, structured guidelines to ensure consistency and accuracy.
Peer reviews: tasks underwent cross-review for validation and error minimization.
KPIs and metrics: performance metrics such as accuracy, precision, and recall were used to assess task quality.
Feedback loops: continuous improvement through iterative feedback and updates to labeling approaches.
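The annotation-quality KPIs named above (accuracy, precision, recall) can be sketched for binary labels as follows; this is a minimal illustration with hypothetical label names and data, not code from the Scilla project.

```python
def annotation_metrics(gold, predicted, positive="relevant"):
    """Compare an annotator's labels against gold-standard labels.

    Accuracy  = fraction of items labeled correctly overall.
    Precision = of items the annotator marked positive, how many were truly positive.
    Recall    = of truly positive items, how many the annotator marked positive.
    """
    tp = sum(1 for g, p in zip(gold, predicted) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, predicted) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, predicted) if g == positive and p != positive)
    correct = sum(1 for g, p in zip(gold, predicted) if g == p)

    return {
        "accuracy": correct / len(gold),
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
    }


# Hypothetical gold labels vs. one annotator's labels on five items
gold = ["relevant", "relevant", "irrelevant", "relevant", "irrelevant"]
annotator = ["relevant", "irrelevant", "irrelevant", "relevant", "relevant"]

print(annotation_metrics(gold, annotator))
# → {'accuracy': 0.6, 'precision': 0.666..., 'recall': 0.666...}
```

In practice such scores would be tracked per annotator and per batch, feeding the peer-review and feedback-loop steps described above.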

2024

Education

University of California

Bachelor's Degree in Linguistics
2009 - 2013

Work History

No Work History added yet