GA-UNet: UNet-based framework for segmentation of 2D and 3D medical images applicable on heterogeneous datasets
Amrita Kaur, Lakhwinder Kaur & Ashima Singh
Segmentation of biomedical images is the process of semi-automatic or automatic detection of boundaries within 2D and 3D images. The major challenge of medical image segmentation is the high variability of shape, location, size and texture across medical images. Manual segmentation is a time-consuming and monotonous process; therefore, a fully automated segmentation process is highly desirable. UNet is one of the most popular and generic architectures deployed for medical image segmentation. This paper proposes two variants based on the UNet architecture, namely 2DGA-UNet and 3DGA-UNet, for the segmentation of 2D and 3D medical images, respectively. The first variant improves the performance of the 2DGA-UNet framework by applying transfer learning to the UNet architecture: a simple convolutional neural network from the VGG family, VGG16, serves as the encoder in the 2DGA-UNet network. The critical concept of 3DGA-UNet is to supplement a contracting network with successive layers in which upsampling operators replace pooling operators. These layers increase the resolution of the output, and the resulting network can be trained end-to-end from very few images, outperforming state-of-the-art methods. The proposed models are evaluated for 2D and 3D medical images on five benchmark datasets covering brain tumor segmentation (BRATS 2018 and BRATS 2019), brain lesion segmentation (MICCAI 2008 multiple sclerosis challenge), lung segmentation (NIH tuberculosis chest X-ray dataset, Shenzhen No. 3 Hospital X-ray set, RSNA pneumonia detection challenge) and liver segmentation (3D-IRCADb-01 database). The comprehensive results show remarkable performance across 14 different evaluation parameters for the segmentation of medical images. Moreover, GA-UNet outperforms traditional methods, achieving an accuracy (ACC) of 97.0% and a Dice similarity coefficient (DSC) of 91.8%.
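To make the decoder idea concrete, the following is a minimal NumPy sketch of the expanding-path step the abstract describes, where an upsampling operator replaces pooling and the result is concatenated with the matching encoder feature map (the skip connection). The function names and toy tensor shapes are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling: the operator that replaces
    pooling in the expanding (decoder) path."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def decoder_step(x, skip):
    """One expanding-path step: upsample the feature map, then
    concatenate the matching encoder feature map (skip connection)
    along the channel axis, as in the standard UNet decoder."""
    up = upsample2x(x)
    return np.concatenate([up, skip], axis=-1)

# Toy feature maps laid out as (H, W, C).
bottleneck = np.random.rand(4, 4, 8)   # low-resolution decoder input
skip = np.random.rand(8, 8, 4)         # encoder feature map at 2x resolution
merged = decoder_step(bottleneck, skip)
print(merged.shape)  # (8, 8, 12): resolution doubled, channels combined
```

In the full architectures, each such step is followed by convolutions, and in 2DGA-UNet the skip tensors come from the corresponding VGG16 encoder stages.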
Read Full Article Here: https://doi.org/10.1007/s00521-021-06134-z