Research Gap Extraction System – LLM Output Labeling and Evaluation
Engineered a constrained LLM system that extracts and labels research gaps directly from source documents, attaching explicit justification markers to each gap. Applied systematic signal-based evaluation and refusal testing to improve factual reliability and suppress unsupported outputs, and structured the system's outputs with explicit explanation labels for downstream use.
• Implemented justification-based evaluation of LLM-generated research gaps.
• Applied filtering and a refusal mechanism for reliable labeling.
• Prioritized factual accuracy in output-reliability tasks.
• Built the system in Python using local LLM inference.
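The filtering and refusal mechanism described above could be sketched as follows. This is a minimal illustration with hypothetical names (`filter_gaps`, the `justification` field, and the sample data are assumptions, not the actual implementation): each extracted gap is accepted only if its justification span can be located verbatim in the source document, and refused otherwise.

```python
# Hypothetical sketch of a justification-grounded refusal filter:
# accept an LLM-extracted gap only when its justification text is
# actually present in the source document; refuse it otherwise.

def filter_gaps(gaps, source_text):
    """Split extracted gaps into accepted (grounded) and refused (unsupported)."""
    accepted, refused = [], []
    for gap in gaps:
        justification = gap.get("justification", "")
        # Refusal rule: a non-empty justification must appear verbatim
        # in the source, otherwise the output is treated as unsupported.
        if justification and justification in source_text:
            accepted.append(gap)
        else:
            refused.append(gap)
    return accepted, refused


source = ("Prior work evaluates summarization quality, but no study "
          "measures citation faithfulness in generated surveys.")

gaps = [
    {"gap": "Citation faithfulness is unmeasured",
     "justification": "no study measures citation faithfulness"},
    {"gap": "Hallucinated gap with fabricated support",
     "justification": "models fail on multilingual corpora"},
]

accepted, refused = filter_gaps(gaps, source)
# The first gap is grounded in the source and accepted;
# the second has no verbatim support and is refused.
```

A verbatim-substring check is the simplest grounding test; a real system might relax it to fuzzy or embedding-based matching, at the cost of admitting paraphrased (and potentially fabricated) justifications.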