AI Red Teaming and Prompt Testing
Conducted red teaming of AI language models by crafting realistic, user-like prompts that stress-tested model reasoning and exposed failure modes such as logical errors and instruction-following lapses. Evaluated responses and annotated errors against established guidelines to support model improvement.