AI Model Tester and Prompt Evaluator
The project involved testing AI models for incorrect or harmful responses and evaluating their behavior against designed edge-case prompts. I was responsible for prompt engineering and for systematically documenting results and findings in support of AI safety initiatives. The work required close attention to detail and analytical reasoning to identify vulnerabilities.

• Tested AI model outputs against edge-case and safety scenarios.
• Designed evaluation prompts to probe model robustness and behavior.
• Systematically documented results and findings for future analysis.
• Focused on red teaming and risk identification for AI safety.