LLM Prompt Engineering and Output Evaluation
Worked with large language models (LLMs) to optimize outputs and reduce hallucinations. Designed and tested prompts to improve response accuracy and quality across AI-powered applications. Regularly performed fact-checking, response evaluation, and structured-reasoning review of AI outputs.
• Developed prompt engineering workflows to improve text generation consistency
• Refined and corrected AI-generated content
• Evaluated and rated LLM responses against quality and factuality standards
• Performed intent checking and error analysis to drive iteration and improvement
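The rating of LLM responses against quality and factuality standards mentioned above can be sketched as a simple rubric-based evaluator in Python. The criteria names, checks, and the `[source: ...]` citation marker below are illustrative assumptions, not the actual standards used in the work described.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    """One rubric item: a name plus a pass/fail check on the response text."""
    name: str
    check: Callable[[str], bool]

def evaluate_response(response: str, criteria: list[Criterion]) -> dict:
    """Rate a response against the rubric; returns per-criterion results and a 0-1 score."""
    results = {c.name: bool(c.check(response)) for c in criteria}
    results["score"] = sum(results.values()) / len(criteria)
    return results

# Hypothetical rubric: non-empty answer, includes a citation marker, stays concise.
rubric = [
    Criterion("non_empty", lambda r: bool(r.strip())),
    Criterion("cites_source", lambda r: "[source:" in r),
    Criterion("concise", lambda r: len(r.split()) <= 150),
]

report = evaluate_response(
    "Paris is the capital of France. [source: CIA Factbook]", rubric
)
```

In practice each criterion would be a more substantive check (e.g. a fact-verification call or a human rating), but the same structure lets per-criterion failures feed directly into the error analysis and iteration loop.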