
Keith Lamb

AI Model Trainer & Prompt Engineer | NLP & RAG Optimization Specialist

Mansfield, TX, USA
$20.00/hr · Entry Level · Argilla · Datasaur · Doccano

Key Skills

Software

Argilla
Datasaur
Doccano
Label Studio
LightTag
Tagtog
Internal/Proprietary Tooling

Top Subject Matter

No subject matter listed

Top Data Types

Computer Code Programming
Document
Text

Top Task Types

Entity (NER) Classification
Evaluation Rating
Question Answering
Text Generation
Text Summarization

Freelancer Overview

Detail-oriented AI data annotator specializing in NLP, code, and document annotation. I help clients turn raw data into high-quality training datasets with accuracy and efficiency. I have hands-on experience in text classification, NER tagging, sentiment analysis, summarization, and QA annotation, combined with a strong programming background for code annotation tasks. I'm proficient with leading labeling tools such as Label Studio, Doccano, Argilla, and LightTag, ensuring smooth integration into your project's workflow. Reliability and quality are my top priorities: I double-check my work and adhere strictly to guidelines to deliver consistent, error-free labels. As an entry-level professional, I bring fresh energy and a growth mindset: I quickly learn new domains and stay responsive to feedback to meet project needs. Clients can expect clear communication, on-time delivery, and a dedication to getting the details right on every task. Let's collaborate to build the clean, well-labeled data your AI project needs to succeed.

Entry Level · English · Spanish

Labeling Experience

Code Extensions

Internal Proprietary Tooling · Computer Code Programming · Classification · Evaluation Rating
In this project, I evaluated and annotated code outputs generated by two AI "personal assistant" models.
- Scope: Rated the correctness of tool calls (e.g., browsing, search), verified parameter usage, and assessed each code snippet's functionality.
- Tasks: Labeled ~500 code segments for accuracy, style, and adherence to internal guidelines, then provided feedback on improvements.
- Quality Measures: Followed a detailed rubric (covering instruction following, truthfulness, and harmlessness) and maintained a 95%+ quality threshold via regular spot checks and QA reviews.
- Purpose: Results, consolidated using the Delphi technique, helped refine the AI models' coding capabilities and improve overall response quality for future development.
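The rubric-and-threshold workflow described above can be sketched in Python. The dimension names and the 95% bar come from the project summary; the pass/fail scoring scale and all function names are invented for illustration, since the actual internal tooling is proprietary.

```python
# Hypothetical sketch of the spot-check logic: each annotated code segment
# gets a per-dimension verdict, and a batch is flagged for QA review when
# its pass rate drops below the 95% quality threshold.

RUBRIC_DIMENSIONS = ("instruction_following", "truthfulness", "harmlessness")
QUALITY_THRESHOLD = 0.95  # the 95%+ bar mentioned in the project summary


def segment_passes(scores: dict) -> bool:
    """A segment passes only if every rubric dimension is marked correct."""
    return all(scores.get(dim, False) for dim in RUBRIC_DIMENSIONS)


def batch_quality(segments: list) -> float:
    """Fraction of segments in a batch that pass the full rubric."""
    if not segments:
        return 0.0
    return sum(segment_passes(s) for s in segments) / len(segments)


def needs_qa_review(segments: list) -> bool:
    """Flag a batch for an extra QA pass when quality dips below threshold."""
    return batch_quality(segments) < QUALITY_THRESHOLD
```

In a real annotation pipeline the verdicts would come from the labeling tool's export rather than hand-built dictionaries, but the threshold check itself stays this simple.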


2024

Starfish System Instruction Function Calling (SIFC)

Internal Proprietary Tooling · Computer Code Programming · Evaluation Rating · Function Calling
For this project, I helped refine "system instructions" and verify JSON-based function calls for an AI assistant.
- Scope: Analyzed a detailed document (100+ pages) outlining strict guidelines on how to format, test, and rewrite instructions.
- Tasks: Reviewed ~400 instruction blocks to ensure they matched the "Starfish SIFC" standards, corrected parameter mismatches and hallucinated inputs, and enforced formatting rules (<function_call> … </function_call>).
- Quality Measures: Maintained above 98% compliance with daily audits of conversation flows and error logs.
- Outcome: Our rewrites improved model consistency, reducing user confusion and ensuring the system's final outputs strictly followed the documented constraints.
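The kind of formatting check described above can be illustrated with a small validator. The actual Starfish SIFC schema is proprietary, so the required keys ("name", "parameters") and the violation messages here are assumptions; only the <function_call> … </function_call> wrapper comes from the description.

```python
# Illustrative validator for a <function_call>-wrapped JSON payload.
# Checks three things the project description mentions: the wrapper tags,
# well-formed JSON, and the presence of expected fields (assumed names).
import json
import re

CALL_RE = re.compile(r"<function_call>\s*(.*?)\s*</function_call>", re.DOTALL)


def extract_call(text: str):
    """Pull the JSON payload out of a <function_call> block, or None."""
    match = CALL_RE.search(text)
    return match.group(1) if match else None


def validate_call(text: str) -> list:
    """Return a list of guideline violations; an empty list means compliant."""
    payload = extract_call(text)
    if payload is None:
        return ["missing <function_call> ... </function_call> wrapper"]
    try:
        call = json.loads(payload)
    except json.JSONDecodeError:
        return ["payload is not valid JSON"]
    errors = []
    for key in ("name", "parameters"):  # assumed required keys
        if key not in call:
            errors.append(f"missing required key: {key}")
    return errors
```

Returning a list of violations rather than a single boolean mirrors how annotation feedback is usually recorded: each error becomes a discrete, reportable label.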


2024

Education


Per Scholas (Sponsored by TEKsystems)

Certificate in Full Stack Java Development
2024 - 2025

Bloom Institute of Technology

Full Stack Web Development Graduate
2022 - 2024

Work History


PanPalz

Frontend Developer

Mansfield
2024 - Present

Outlier

AI Model Trainer & NLP Prompt Engineer

Mansfield
2024 - Present