Multi-Agent LLM System – Prompt Engineering & Function Calling
Built a multi-agent autonomous trading system with four LLM-driven agents, each with a distinct investment personality and decision pipeline. A core part of the work involved crafting and refining prompts for each agent so that its outputs stayed consistent, on-strategy, and tool-call-ready. Integrated MCP (Model Context Protocol) to expose live tool backends — accounts, market data, and push servers — so every agent response had to be structured precisely enough to trigger the right function calls reliably. Worked across five LLM backends (DeepSeek, Gemini, Grok, Ollama, OpenRouter) and observed firsthand how the same prompt produces wildly different outputs across models, which shaped how I wrote prompts and evaluated responses. The work sat at the intersection of prompt engineering, structured output design, and model evaluation.
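
To give a feel for what "tool-call-ready" means in practice, here is a minimal sketch of the kind of structured-output contract an agent response had to satisfy before it could trigger a backend call. The tool name (`get_quote`), its schema, and the validator are illustrative assumptions for this write-up, not the project's actual MCP tool definitions:

```python
# Hypothetical tool schema in the JSON-Schema style used by
# function-calling APIs; the tool name and fields are assumed.
GET_QUOTE_TOOL = {
    "name": "get_quote",
    "description": "Fetch the latest price for a ticker from the market data server.",
    "parameters": {
        "type": "object",
        "properties": {
            "ticker": {"type": "string", "description": "Stock symbol, e.g. AAPL"},
        },
        "required": ["ticker"],
    },
}

def validate_tool_call(call: dict, schema: dict) -> bool:
    """Minimal structural check: correct tool name and all required
    arguments present. A malformed model response fails here instead
    of reaching the live backend."""
    if call.get("name") != schema["name"]:
        return False
    args = call.get("arguments", {})
    required = schema["parameters"].get("required", [])
    return all(k in args for k in required)

# A well-structured response passes; one missing a required arg is rejected.
good = {"name": "get_quote", "arguments": {"ticker": "AAPL"}}
bad = {"name": "get_quote", "arguments": {}}
print(validate_tool_call(good, GET_QUOTE_TOOL))  # True
print(validate_tool_call(bad, GET_QUOTE_TOOL))   # False
```

Validation gates like this mattered because the five backends differed in how faithfully they emitted the requested structure, so every response was checked before any function call fired.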