Prompt Engineer & AI Trainer (Contract: OpenAI via Scale AI)
As a Prompt Engineer & AI Trainer at Scale AI (contracted to OpenAI), I optimized large language model outputs by designing, testing, and refining prompts. I evaluated and ranked AI-generated responses, providing structured feedback through RLHF workflows to improve model accuracy, consistency, and contextual alignment. My work included identifying errors, biases, and edge cases to raise training data quality and model performance.

• Developed structured evaluation prompts and decision frameworks for ambiguous or indeterminate outputs.
• Applied human reasoning standards to refine response-selection criteria.
• Provided detailed feedback on LLM outputs to drive continuous model improvement.
• Documented findings to support dataset upgrades and AI training protocols.