AI Model Governance & Evaluation Specialist
Served as an AI Model Governance & Evaluation Specialist conducting rigorous RLHF for large language models to strengthen output safety and compliance. Designed human feedback tasks and evaluation protocols tailored to generative AI coding outputs. Delivered comprehensive assessment reports that informed model improvements within enterprise governance frameworks.

• Reduced hallucination rates in AI-generated code by approximately 18%.
• Reviewed and rated LLM outputs for Python and SQL code generation.
• Ensured model alignment with compliance standards for regulated industries.
• Collaborated remotely with model developers through continuous feedback cycles.