Yunxin Liu, Di Yuan, Zhenghua Xu, Yuefu Zhan, Hongwei Zhang, Jun Lu, Thomas Lukasiewicz
Sci Rep. 2025 Mar 10;15(1):8213. doi: 10.1038/s41598-025-92117-2.
Existing deep learning methods have achieved significant success in medical image segmentation. However, this success largely relies on stacking advanced modules and architectures, which has created a path dependency. This path dependency is unsustainable, as it leads to ever-larger model parameter counts and higher deployment costs. To break this path dependency, we introduce deep reinforcement learning to enhance segmentation performance. However, current deep reinforcement learning methods face challenges such as high training costs, independent iterative processes, and high uncertainty in the segmentation masks. Consequently, we propose a Pixel-level Deep Reinforcement Learning model with pixel-by-pixel Mask Generation (PixelDRL-MG) for more accurate and robust medical image segmentation. PixelDRL-MG adopts a dynamic iterative update policy and directly segments the regions of interest without requiring user interaction or coarse segmentation masks. We propose a Pixel-level Asynchronous Advantage Actor-Critic (PA3C) strategy that treats each pixel as an agent whose state (foreground or background) is iteratively updated through direct actions. Experiments on two commonly used medical image segmentation datasets demonstrate that PixelDRL-MG achieves superior segmentation performance compared to state-of-the-art segmentation baselines (especially at boundaries) while using significantly fewer model parameters. We also conducted detailed ablation studies to deepen understanding and facilitate practical application. Additionally, PixelDRL-MG performs well in low-resource settings (i.e., 50-shot or 100-shot), making it an ideal choice for real-world scenarios.
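To make the pixel-as-agent idea concrete, the sketch below shows one synchronous pixel-level actor-critic update in PyTorch. It is not the paper's implementation: the network layout, the binary action set (assign background / assign foreground), the per-pixel reward (change in agreement with the ground-truth mask), and the single-worker update (a true A3C setup would run several such workers asynchronously) are all assumptions introduced here for illustration only.

```python
# Minimal sketch of a pixel-level actor-critic update, assuming a grayscale
# image, a binary action per pixel, and reward = change in per-pixel agreement
# with the ground truth. None of these details are specified in the abstract.
import torch
import torch.nn as nn

class PixelActorCritic(nn.Module):
    """Shared conv trunk with per-pixel policy and value heads."""
    def __init__(self, in_ch=2, hidden=32, n_actions=2):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        self.policy = nn.Conv2d(hidden, n_actions, 1)  # per-pixel action logits
        self.value = nn.Conv2d(hidden, 1, 1)           # per-pixel state value

    def forward(self, image, mask):
        x = torch.cat([image, mask], dim=1)            # condition on current mask
        h = self.trunk(x)
        return self.policy(h), self.value(h).squeeze(1)

def pixel_ac_step(net, opt, image, mask, target, gamma=0.95):
    """One update: every pixel acts, the mask is rewritten, and the shared
    policy/value heads are trained from per-pixel advantages."""
    logits, value = net(image, mask)
    dist = torch.distributions.Categorical(logits=logits.permute(0, 2, 3, 1))
    action = dist.sample()                             # 0 = background, 1 = foreground
    new_mask = action.float().unsqueeze(1)

    # Per-pixel reward: did this action improve agreement with the ground truth?
    reward = (new_mask.eq(target).float() - mask.eq(target).float()).squeeze(1)

    with torch.no_grad():
        _, next_value = net(image, new_mask)
    advantage = reward + gamma * next_value - value

    policy_loss = -(dist.log_prob(action) * advantage.detach()).mean()
    value_loss = advantage.pow(2).mean()
    entropy = dist.entropy().mean()
    loss = policy_loss + 0.5 * value_loss - 0.01 * entropy

    opt.zero_grad()
    loss.backward()
    opt.step()
    return new_mask.detach(), loss.item()
```

At inference time the same loop would run without the loss computation: starting from an all-background mask, each pixel repeatedly re-selects its state for a fixed number of iterations, which is how the dynamic iterative update policy described above refines the segmentation without user interaction or a coarse initial mask.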