
Jessie Morales

AI/ML Engineer and Data Labeling & Annotation Specialist.

Round Rock, USA
$20.00/hr · Expert
AWS SageMaker · Appen · Clickworker

Key Skills

Software

AWS SageMaker
Appen
Clickworker

Top Subject Matter

Machine Learning
Computer Vision & Image Recognition
Natural Language Processing (NLP) / LLM

Top Data Types

Image
Text
Document

Top Task Types

Classification
Bounding Box
Segmentation
Polygon
Entity (NER) Classification

Freelancer Overview

Principal Software Engineer in Contract Review, Compliance, and Legal Research. Brings 27+ years of professional experience across complex professional workflows, research, and quality-focused execution. Education includes Master of Science, University of Texas at Dallas (2014) and Bachelor of Science, Texas A&M University (1997).

English (Expert)

Labeling Experience

Computer Vision Dataset Curation & Quality Assurance

Image · Classification
Project Scope: Supported a Neural Architecture Search (NAS) research initiative focused on optimizing CNN architectures for edge vision applications. The project involved curating and annotating subsets of the ImageNet dataset to train reinforcement learning agents for architecture discovery.

Specific Data Labeling Tasks Performed:
- Performed multi-class image classification labeling across 1,000+ categories with strict adherence to ImageNet ontology standards
- Created and validated bounding box annotations for object localization tasks used in model evaluation
- Conducted quality assurance audits on existing labeled datasets, identifying and correcting approximately 200 mislabeled samples that were impacting model convergence
- Developed Python scripts to automate consistency checks across label sets, flagging edge cases and ambiguous annotations for human review

Project Size:
- Managed annotation workflows for approximately 50,000 images across training, validation, and test splits
- Collaborated with a team of 3 researchers to align labeling guidelines with model performance goals

Quality Measures Adhered To:
- Maintained inter-annotator agreement (IAA) standards with 95%+ consistency across reviewers
- Implemented a multi-pass review process: each sample was reviewed by at least two annotators, with discrepancies escalated for senior review
- Used randomized spot-checking (10% of batches) to verify label accuracy before integration into training pipelines
- Documented labeling guidelines and edge case definitions to ensure consistency across sessions

This experience demonstrates a strong understanding of how annotation precision directly influences model performance, combining hands-on labeling expertise with a technical background in AI/ML development.
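The consistency checks, agreement tracking, and randomized spot-checking described above could be sketched in Python along these lines. This is a minimal illustration, not the project's actual code: the data layout (a dict mapping sample IDs to each annotator's label) and all function names here are assumptions.

```python
import random


def flag_disagreements(annotations):
    """Flag samples whose annotators disagree, for escalation to senior review.

    `annotations` maps sample_id -> list of labels from independent annotators
    (this structure is assumed for illustration).
    """
    return [sid for sid, labels in annotations.items() if len(set(labels)) > 1]


def percent_agreement(annotations):
    """Simple IAA metric: fraction of samples where all annotators agree."""
    total = len(annotations)
    agreed = sum(1 for labels in annotations.values() if len(set(labels)) == 1)
    return agreed / total if total else 0.0


def spot_check_sample(sample_ids, fraction=0.10, seed=42):
    """Randomized spot-check: select a fraction of samples for manual audit."""
    rng = random.Random(seed)  # fixed seed so audits are reproducible
    ids = sorted(sample_ids)
    k = max(1, round(len(ids) * fraction))
    return rng.sample(ids, k)


# Hypothetical example: two annotators labeled four images
annotations = {
    "img_001": ["cat", "cat"],
    "img_002": ["dog", "wolf"],  # disagreement -> escalate
    "img_003": ["car", "car"],
    "img_004": ["bird", "bird"],
}
print(flag_disagreements(annotations))           # ['img_002']
print(round(percent_agreement(annotations), 2))  # 0.75
print(spot_check_sample(annotations, fraction=0.25))
```

In practice a production pipeline would use a chance-corrected agreement statistic such as Cohen's kappa rather than raw percent agreement, but the workflow shape — flag disagreements, measure agreement, audit a random slice — is the same.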

2022 - 2025

Education


University of Texas at Dallas

Master of Science, Computer Science

2012 - 2014

Texas A&M University

Bachelor of Science, Computer Engineering

1993 - 1997

Work History


Dell Technologies

Principal Software Engineer

Round Rock
2018 - Present

Rackspace Technology

Senior Systems Architect

San Antonio
2008 - 2017