AI Prompt Engineer / Model Response Evaluator
Participated in AI alignment workflows spanning prompt engineering, model response evaluation, and rubric development. Created prompt–response pairs for training and fine-tuning language models, and evaluated model outputs against predefined rubrics and guidelines to improve model performance and reliability.
• Authored detailed instructions and curated prompts for language model training.
• Rated AI-generated responses for quality and accuracy.
• Developed and refined rubrics for systematic evaluation of model outputs.
• Contributed to team discussions identifying opportunities for prompt optimization.