• A Knowledge-Driven Evidence Fusion Network for pancreatic tumor segmentation in CT images

    Kaiqi Dong, Yan Zhu, Yu Tian, Peijun Hu, Chengkai Wu, Xiang Li, Tianshu Zhou, Xueli Bai, Tingbo Liang, Jingsong Li

    Abstract

    Accurate pancreatic tumor segmentation remains challenging due to complex anatomical structures and diverse tumor appearances. This study presents the Knowledge-Driven Evidence Fusion Segmentation Network (KEFS-Net), a framework that systematically integrates radiological and anatomical knowledge from medical reports with imaging features to enhance segmentation accuracy. KEFS-Net consists of three key components: (1) a knowledge-driven attention network that leverages large language models, a discrete information bottleneck, and cross-attention to enhance CT image segmentation by capturing informative features from medical reports, (2) an evidence fusion strategy based on Dempster–Shafer theory that refines segmentation results by evaluating the consistency between textual knowledge and image predictions, and (3) a masked learning approach that ensures robust performance in clinical scenarios with incomplete tumor descriptions. The framework was evaluated on both the Medical Segmentation Decathlon (MSD) dataset and an external clinical dataset from the First Affiliated Hospital (FAH) of Zhejiang University School of Medicine. Experimental results demonstrate superior performance compared to state-of-the-art methods, achieving Dice scores of 59.10% and 59.42% for tumor segmentation on the MSD and external datasets, respectively. The approach shows particular strength in handling diverse tumor characteristics, including size variations, boundary ambiguity, and complex anatomical locations. This knowledge-driven framework represents a significant advancement in leveraging domain knowledge through multi-modal integration for improved pancreatic tumor segmentation. Our code is available at https://github.com/Singlesnail/KEFS-Net.
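
    To illustrate the evidence fusion idea mentioned above, the following is a minimal, hypothetical sketch of Dempster's rule of combination on a binary frame (tumor vs. background, plus an uncertainty mass on the whole frame). It is not the authors' implementation; the mass assignments and function names are illustrative only.

    ```python
    def dempster_combine(m1, m2):
        """Combine two basic mass assignments over the frame {tumor, bg}.

        Each argument is a dict with keys 'tumor', 'bg', and 'theta'
        (mass on the full frame, i.e., "don't know"), summing to 1.
        Returns the normalized combined masses per Dempster's rule.
        """
        # Conflict: mass assigned to contradictory singletons.
        conflict = m1['tumor'] * m2['bg'] + m1['bg'] * m2['tumor']
        norm = 1.0 - conflict  # normalization factor (assumes conflict < 1)

        # Intersections supporting each focal element; 'theta' intersects everything.
        tumor = (m1['tumor'] * m2['tumor']
                 + m1['tumor'] * m2['theta']
                 + m1['theta'] * m2['tumor']) / norm
        bg = (m1['bg'] * m2['bg']
              + m1['bg'] * m2['theta']
              + m1['theta'] * m2['bg']) / norm
        theta = (m1['theta'] * m2['theta']) / norm
        return {'tumor': tumor, 'bg': bg, 'theta': theta}

    # Example: an image-based prediction and a text-derived belief for one voxel.
    image_mass = {'tumor': 0.6, 'bg': 0.3, 'theta': 0.1}
    text_mass = {'tumor': 0.7, 'bg': 0.2, 'theta': 0.1}
    fused = dempster_combine(image_mass, text_mass)
    # Agreement between the two sources strengthens the tumor belief.
    ```

    In a segmentation setting, such masses could be derived per voxel from the image branch's softmax output and from knowledge-conditioned predictions; agreeing sources reinforce each other while conflicting mass is discounted.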