Red Teaming Workshop Participant
Participated in a Red Teaming & AI Safety Workshop focused on assessing the robustness and ethical alignment of artificial intelligence systems. Evaluated AI behaviors across varied scenarios to identify vulnerabilities and unsafe outputs, and contributed to improving the reliability and safety of large language models through targeted testing and feedback.
• Conducted adversarial scenario testing on AI models
• Assessed model outputs for robustness and ethical compliance
• Collaborated in feedback sessions to improve AI safety
• Developed a critical understanding of AI safety methodologies