AI Software Developer – LLM Assistant Training and Evaluation
Designed prompts and evaluated model outputs to improve the accuracy and reliability of LLM-based coding assistants. Worked with multiple large language model providers to extend assistant capabilities for code generation and developer-workflow automation, focusing on prompt engineering and systematic evaluation to drive model refinement and system improvement.
• Developed and iterated on prompts for code-related tasks and troubleshooting.
• Evaluated AI outputs, providing structured feedback and analyzing results.
• Integrated LLMs from OpenAI, Anthropic, xAI, and MiniMax into multi-agent workflows.
• Monitored agent performance and tuned system parameters to optimize responses.