
Brandon Voege

Software Engineer, AI Trainer, and Data Annotator

Indianapolis, USA
$26.00/hr · Expert · Label Studio, Prodigy, Doccano

Key Skills

Software

Label Studio
Prodigy
Doccano
AWS SageMaker
CVAT

Top Subject Matter

Technology & AI Development – Software Engineering, Machine Learning Training Data, Code Annotation
Finance & FinTech – Risk Analysis, Fraud Detection, Transaction Data Classification
E-commerce & Digital Platforms – Product Categorization, Customer Support Data, Sentiment & Intent Annotation

Top Data Types

Computer Code / Programming
Text
3D Sensor

Top Task Types

Entity (NER) Classification
Segmentation
Text Summarization
Computer Programming / Coding
Text Generation
Data Collection
Prompt/Response Writing (SFT)
Red Teaming
RLHF
Polygon
Bounding Box
Point / Key Point
Object Detection
Question Answering
Evaluation / Rating
Cuboid

Freelancer Overview

I have a strong background working with structured and unstructured data in software engineering and AI-support environments, where data accuracy, consistency, and annotation quality are critical. As a Software Engineer at Twilio and a Backend Developer at One Beyond Ltd, I regularly worked with large datasets used for analytics systems, automation pipelines, and machine learning–ready data structures. My experience includes preparing, cleaning, validating, and organizing datasets so that they meet strict quality standards for downstream applications such as analytics dashboards and automated decision systems.

I have also contributed to data labeling and classification tasks in which datasets were tagged, categorized, and validated to support AI model training and testing. Beyond development work, I have hands-on experience reviewing datasets, identifying anomalies, ensuring annotation consistency, and maintaining clear documentation for labeling guidelines.

My attention to detail, combined with technical proficiency in Python, SQL, and data processing workflows, lets me perform high-volume labeling tasks efficiently while maintaining accuracy and quality. My background in software engineering and cybersecurity also gives me a structured, analytical approach to data validation, making me well suited to AI training data preparation, annotation, and quality assurance workflows.
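To illustrate the kind of validation pass described above, here is a minimal sketch in Python. It is not code from any of the projects mentioned; the field names and label set are hypothetical, chosen only to show the basic shape of an annotation QA check (flagging empty text and unknown labels before records reach a training pipeline).

```python
# Hypothetical label set for illustration -- a real project would load
# this from its annotation guidelines.
ALLOWED_LABELS = {"billing_inquiry", "technical_issue", "account_update"}

def validate_records(records):
    """Split annotation records into (valid, anomalies).

    Each record is a dict with hypothetical "text" and "label" fields.
    Anomalies are returned as (record, reason) pairs for reviewer follow-up.
    """
    valid, anomalies = [], []
    for rec in records:
        text = rec.get("text", "").strip()
        label = rec.get("label")
        if not text:
            anomalies.append((rec, "empty text"))
        elif label not in ALLOWED_LABELS:
            anomalies.append((rec, f"unknown label: {label!r}"))
        else:
            valid.append(rec)
    return valid, anomalies
```

In practice a pass like this runs before inter-annotator review, so obviously malformed records never consume reviewer time.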

Expert · English · Spanish

Labeling Experience

AI Training Data Annotation for Customer Interaction Analytics

Text · Text Generation
This project involved preparing and labeling large volumes of customer interaction data to support the training of conversational AI and customer analytics models. I worked with structured and unstructured datasets consisting of chat transcripts, support tickets, and customer feedback records. My primary responsibilities included annotating text data for intent classification, sentiment analysis, and entity recognition to improve the accuracy of natural language processing (NLP) models.

The work required consistent tagging of customer intents (e.g., billing inquiry, technical issue, account update), identifying named entities such as product names, transaction references, and service categories, and labeling sentiment indicators to train supervised learning models. The dataset consisted of thousands of interaction records, and strict annotation guidelines were followed to ensure consistency and reliability. I also performed dataset validation, quality checks, and annotation reviews to maintain high accuracy and reduce labeling errors.

2022 - 2023
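The three annotation layers described in this project (intent, named entities, sentiment) can be sketched as a single record. The schema below is hypothetical, not the project's actual format: entities are stored as character spans, and a small consistency check verifies that every span lies inside the text, the kind of quality check mentioned above.

```python
# Hypothetical annotation record combining intent, sentiment, and
# span-based entity labels for one customer interaction.
record = {
    "text": "My invoice INV-1042 for the Pro plan was charged twice.",
    "intent": "billing_inquiry",
    "sentiment": "negative",
    "entities": [
        {"start": 11, "end": 19, "label": "transaction_reference"},
        {"start": 28, "end": 36, "label": "product_name"},
    ],
}

def check_spans(rec):
    """Consistency check: every entity span must fall inside the text."""
    n = len(rec["text"])
    return all(0 <= e["start"] < e["end"] <= n for e in rec["entities"])
```

Span-based storage keeps the raw text untouched, so the same record can feed NER, intent, and sentiment models without re-annotation.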

Education

Trinity College Dublin

Doctor of Philosophy, Computer Science and Software Engineering

2028 - 2028
University of Hertfordshire

Master of Science, Software Engineering

2019 - 2019

Work History

Twilio

Software Engineer

Indianapolis
2023 - Present
One Beyond

Back End Developer

Farnborough
2018 - 2021