Red Teaming Lead for LLM Agents (Senior Manager, AI Governance & Risk)
I orchestrated cross-functional red teaming exercises for LLM agents, identifying over 50 high-risk prompt injection vulnerabilities prior to deployment. My work centered on evaluating the resilience and safety of large language models before their public release. This process ensured stricter compliance and risk reduction for generative AI systems. •Executed systematic adversarial prompt injections. •Documented AI agent responses and identified non-compliance. •Reported findings to product and compliance teams. •Enhanced the pre-release risk review pipeline.