AI Model Evaluation and Data Quality Assessor (Project-Based Research Scientist)
Conducted structured evaluation of, and provided feedback on, human-centred research datasets for use in AI model training environments. Developed and implemented protocols to assess the quality, consistency, and interpretive rigour of human behavioural data destined for structured outputs. Used Python and advanced AI coding assistants for workflow automation, data cleaning, and quality assurance.
• Contributed to digital and app-based research development integrating user-centred feedback.
• Applied rigorous ethics and data-quality standards to annotated datasets.
• Supported model evaluation and high-quality evidence synthesis.
• Delivered outputs supporting structured AI training and evaluation.