View adaptive learning for pancreas segmentation
Yan Wang, Jianpeng Zhang, Hengfei Cui, Yanning Zhang, Yong Xia
Pancreas segmentation is a fundamental step in computer-aided diagnosis of pancreatic cancer. Although the 3D U-Net has been the dominant architecture for this task, its ability to represent the 3D context in volumetric data remains limited. In this paper, we propose the view adaptive 3D U-Net (VA-3DUNet) method for pancreas segmentation in contrast-enhanced abdominal computed tomography (CT) volumes. Adopting the location-to-segmentation strategy, we first train a 3D U-Net for pancreas localization, and then train another 3D U-Net in a view adaptive way to segment the pancreas in the volume of interest (VOI) determined in the localization step. Such view adaptive training enables the 3D U-Net to perceive each volumetric sample from the axial, coronal, and sagittal views simultaneously and hence improves its ability to represent the 3D context. We evaluated the proposed VA-3DUNet method against four state-of-the-art methods on the NIH pancreas segmentation dataset and achieved an average Dice similarity coefficient of 86.19%, which is higher than that achieved by the competing methods. Our results demonstrate the effectiveness of the view adaptive training and the satisfactory performance of the proposed VA-3DUNet method in pancreas segmentation.
Read Full Article Here: https://doi.org/10.1016/j.bspc.2020.102347
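The core of the view adaptive idea is that a single 3D volume can be presented to the network in its axial, coronal, or sagittal orientation simply by permuting the spatial axes. The snippet below is a minimal illustrative sketch of that step, assuming a NumPy volume in (depth, height, width) order; the axis conventions and the sampling policy are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

# Assumed axis conventions for a (D, H, W) CT volume; the paper's exact
# orientation mapping may differ. Each tuple is a transpose order.
VIEWS = {
    "axial":    (0, 1, 2),  # original orientation
    "coronal":  (1, 0, 2),  # slice along the height axis
    "sagittal": (2, 0, 1),  # slice along the width axis
}

def to_view(volume: np.ndarray, view: str) -> np.ndarray:
    """Permute a (D, H, W) volume into the requested anatomical view."""
    return np.transpose(volume, VIEWS[view])

def sample_view(volume: np.ndarray, rng: np.random.Generator):
    """Pick one of the three views at random, as a view adaptive training
    loop might do before feeding the VOI to the segmentation 3D U-Net."""
    view = rng.choice(list(VIEWS))
    return view, to_view(volume, view)

# Example: a toy 2x3x4 volume seen from each view.
vol = np.arange(24).reshape(2, 3, 4)
for name in VIEWS:
    print(name, to_view(vol, name).shape)
```

Because `np.transpose` returns a view rather than a copy, cycling a batch through the three orientations adds essentially no memory overhead; the network weights are shared across all three views, which is what lets one 3D U-Net perceive the volume from every direction.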