Automated LLM Red-Teaming Scanner Developer
Developed and executed automated adversarial testing on local and API-based large language models (LLMs) to assess vulnerabilities. Used prompt injection and token smuggling attacks within Python-based testing suites. Generated vulnerability reports and mapped attack success to established AI security benchmarks. • Audited LLM responses for compliance with security guardrails • Conducted prompt-based adversarial evaluations • Compiled findings into structured vulnerability documentation • Implemented real-time attack simulation workflows