AI Failure Pattern Research — Independent Study
Independently researched and documented how large language models (LLMs) fail in real-world use. Created adversarial prompts to systematically identify and reproduce five distinct failure patterns in AI model responses. Compiled the findings and presented them as a structured research report.
• Explored LLM vulnerabilities such as Commitment Cascade and Context Window Drift
• Developed adversarial prompt sets for systematic model evaluation
• Categorized failure modes to support more robust AI evaluation protocols
• Reported results as a foundation for further prompt engineering and annotation work