PhD Researcher in NLP (Data Annotation, LLM Evaluation)
As a PhD Researcher at the University of Surrey, I focused on constructing high-quality datasets for NLP tasks through human annotation and GPT-assisted synthetic data generation. I developed evaluation metrics and scoring models to quantify the creativity of language model responses, using multi-round dialogues to assess and improve LLM performance. My work involved the careful design of annotation guidelines and the application of prompting paradigms for robust data collection and assessment.
• Constructed NLP datasets using a combination of human annotation and automated data synthesis.
• Developed and refined scoring criteria for evaluating LLM-generated responses.
• Challenged LLMs with multi-round dialogues in collaboration with subject-matter experts.
• Implemented and compared advanced fine-tuning and evaluation frameworks for model optimization.