RLHF Evaluator – Biomedical & Health AI
I contributed to reinforcement learning from human feedback (RLHF) processes by evaluating and improving AI-generated responses in medical and health science contexts. This role required critical assessment of AI system outputs, application of ethical review standards, and provision of actionable, structured feedback. My efforts directly supported safer and more reliable health-domain AI applications.

• Conducted accuracy and bias assessments of AI-generated health responses.
• Evaluated alignment of AI content with scientific evidence and ethical norms.
• Generated high-quality structured feedback to inform model fine-tuning.
• Collaborated with AI research teams to refine annotation guidelines and review protocols.