AI Model Evaluation & Prompting (Academic Focus)
Within my academic focus on AI Model Evaluation & Prompting, I developed, tested, and refined complex prompts for large language models. I created feedback loops to enhance reasoning consistency and documented model limitations. My work contributed to improved model outputs through supervised fine-tuning tasks.

• Designed prompts to test and train LLM capabilities
• Systematically recorded model failures on multi-step logic
• Provided structured feedback for iterative model improvement
• Contributed to model evaluation projects as part of AI Engineering coursework
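The evaluation workflow above can be sketched as a minimal feedback loop: run each prompt, compare the model's answer to an expected one, and record structured feedback for failures. This is an illustrative sketch only; `query_model` is a hypothetical placeholder for a real LLM API call, and the test cases and failure schema are assumptions, not the actual coursework artifacts.

```python
def query_model(prompt: str) -> str:
    # Hypothetical placeholder for an LLM API call.
    # Simulated here as a model that handles single-step arithmetic
    # but fails on multi-step logic.
    return "4" if "2 + 2" in prompt else "unknown"

def evaluate(cases):
    """Run each prompt through the model and record structured
    feedback entries for every case the model gets wrong."""
    failures = []
    for case in cases:
        output = query_model(case["prompt"]).strip()
        if output != case["expected"]:
            failures.append({
                "prompt": case["prompt"],
                "expected": case["expected"],
                "got": output,
                "category": case["category"],  # e.g. "multi-step logic"
            })
    return failures

# Assumed example cases for illustration.
cases = [
    {"prompt": "What is 2 + 2?", "expected": "4",
     "category": "single-step"},
    {"prompt": "What is (2 + 2) * 3?", "expected": "12",
     "category": "multi-step logic"},
]
failures = evaluate(cases)
```

In practice, the recorded failure entries feed the next iteration: categorized failures highlight which prompt patterns need redesign, closing the loop between evaluation and prompt refinement.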