AI Security Red Teamer and Adversarial Evaluator
I conducted adversarial red teaming and security evaluation of LLM-integrated systems, focusing on assessing AI behavior against prompts designed to expose vulnerabilities. This included evaluating prompt injection, trust boundary bypassing, and simulated data exfiltration in text-processing AI/LLM architectures. I developed and used automated payload testing workflows and structured reporting for effective LLM adversarial assessment. • Performed prompt injection testing on text-based AI/LLM systems • Simulated unsafe tool access and data exfiltration via adversarial prompts • Developed internal research platforms for LLM security evaluation • Delivered evaluation reports with actionable remediation guidance.