AI Red Teamer and Security Researcher (LLM Red Teaming and Prompt Attack)
Led red teaming and security testing of Large Language Models (LLMs) to identify vulnerabilities arising from model handling of adversarial prompts and malicious input. Authored formal reports on AI/LLM vulnerabilities, contributing to the improvement of models through adversarial interaction and prompt manipulation. Developed specialized prompt injection and jailbreaking sequences as part of independent and bug bounty-driven AI security research. • Conducted prompt injection and jailbreaking experiments targeting live LLM APIs. • Evaluated LLM responses against OWASP LLM and Agentic AI Top 10 frameworks. • Documented security findings and model behavior for AI model improvement. • Delivered feedback on model weaknesses to LLM maintainers and platforms.