AI Trainer / LLM Quality & Safety Evaluator
As an AI Trainer and LLM Quality & Safety Evaluator, I designed and refined prompts for large language models, evaluated AI-generated responses against rubrics for factual accuracy, logical consistency, tone, and safety, and conducted adversarial testing with structured feedback to improve model reliability.

• Developed and iteratively improved prompt sets for LLMs
• Evaluated responses for logic, coherence, bias, and safety
• Delivered structured, evidence-based feedback on model behavior
• Performed adversarial and red-team testing to expose model vulnerabilities