AI Output Evaluator/Red Teamer
I regularly work with AI models, testing outputs, performing red teaming, and improving model performance. My duties include content moderation, checking AI outputs for accuracy and safety, and flagging and correcting issues in model responses. I am experienced in assessing the quality of generated content and providing detailed feedback to enhance model reliability.
• Conducted ongoing red teaming to identify vulnerabilities in text model outputs.
• Moderated and evaluated AI-generated text responses to ensure they met content guidelines.
• Provided prompt, detailed feedback on problematic outputs to improve accuracy.
• Helped optimize AI systems through iterative output review and suggestions.