Research Assistant — LLM-based automated paper review tool
This role involved fine-tuning large language models (LLMs) on a dataset of over 14,000 research papers and reviews. A section-wise scoring system was built to generate structured evaluations and improve the quality of automated review support, and reflection-based and Retrieval-Augmented Generation (RAG) loops were applied to further reduce factual inconsistencies and strengthen review assessments.
• Fine-tuned LLMs for targeted evaluation of academic papers.
• Designed and implemented an evaluation system that scores individual research paper sections.
• Applied RAG and reflection techniques to improve review accuracy.
• Deployed an automated review tool adopted by 100+ researchers.
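The pipeline described above (retrieve supporting context, score each section, then run a reflection pass over the result) can be sketched roughly as follows. This is a minimal illustrative sketch, not the project's actual code: the retriever and scorer are stubbed with simple word-overlap heuristics standing in for model calls, and every name here (`CORPUS`, `retrieve`, `score_section`, `reflect`) is hypothetical.

```python
# Illustrative sketch: toy heuristics stand in for LLM and retriever calls.

CORPUS = {
    "transformers": "Transformer models rely on self-attention over tokens.",
    "evaluation": "Automated review systems should score each section separately.",
}

def retrieve(query: str, corpus: dict, k: int = 1) -> list:
    """Toy retriever: rank corpus entries by word overlap with the query."""
    qwords = set(query.lower().split())
    ranked = sorted(corpus.values(),
                    key=lambda text: len(qwords & set(text.lower().split())),
                    reverse=True)
    return ranked[:k]

def score_section(name: str, text: str, context: list) -> dict:
    """Stub scorer: crude length-based score, with a bonus when the section
    shares vocabulary with the retrieved context (RAG grounding)."""
    base = min(len(text.split()) / 50, 1.0)
    ctx_words = set(" ".join(context).lower().split())
    grounded = bool(set(text.lower().split()) & ctx_words)
    return {"section": name, "score": round(base + (0.2 if grounded else 0.0), 2)}

def reflect(review: dict) -> dict:
    """Reflection pass: flag low-confidence scores for a second look."""
    review["needs_check"] = review["score"] < 0.3
    return review

def review_paper(sections: dict) -> list:
    """Section-wise loop: retrieve context, score the section, then reflect."""
    return [reflect(score_section(name, text, retrieve(text, CORPUS)))
            for name, text in sections.items()]
```

In the real system the stubbed functions would be replaced by embedding-based retrieval and fine-tuned model calls; the loop structure (per-section retrieve → score → reflect) is the part this sketch is meant to convey.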