Prompt Engineer / LLM Output Evaluator
I designed, tested, and optimized prompts and model responses for generative AI use cases including summarization, classification, and structured response generation. I reviewed prompt variants, ranked output quality, and rewrote AI outputs to improve clarity, tone, and compliance. My work supported the research, evaluation, and deployment of large language model (LLM)-based tools and workflows.
• Created and refined prompt templates for instruction-based, informative, and conversational tasks.
• Evaluated and ranked multi-step AI-generated outputs on clarity, completeness, and structure.
• Optimized prompts for response consistency and reduced ambiguity in model behavior.
• Documented findings and improvement recommendations for engineering and product teams.