AI Trainer / Data Annotator / Prompt Engineer
As an AI Trainer and Prompt Engineer, I designed, rated, and evaluated AI outputs for accuracy and domain correctness in STEM and engineering tasks. My role involved structured data annotation, RLHF-style output evaluation, and generating diverse prompt-response datasets for large language models. I applied data annotation best practices, provided structured feedback, and maintained high quality standards using industry-leading AI labeling platforms.

• Designed, tested, and rated hundreds of domain-specific prompts for large language models in STEM domains.
• Labeled and annotated model responses for scientific and engineering data using Labelbox-, Scale AI-, and Surge AI-style methodologies.
• Created instruction-response pairs and performed red-teaming to strengthen model accuracy, safety, and logical consistency.
• Evaluated and paraphrased model-generated text using tools such as QuillBot, Grammarly, and Hemingway Editor for quality assurance.