AI Gaming LLM Project Lead
As AI Gaming LLM Project Lead, I contributed to several critical LLM humanization, evaluation, and safety initiatives in gaming contexts. I established processes for annotator calibration, safety review, and benchmarking of model performance on human intent and sensitive content. My work included developing and delivering domain-specific knowledge benchmarks, safety and red-teaming workflows, and quality assurance guidelines.
• Delivered structured annotation and review of AI model responses in gaming.
• Defined and implemented multi-domain guidelines and calibration mechanisms.
• Designed and enforced LLM safety response frameworks for sensitive topics.
• Led technical delivery and quality workflows for benchmarking projects.