Imaging Pearls ❯ Deep Learning ❯ Deep Learning and the Pancreas

  • “Accurate and robust segmentation of abdominal organs on CT is essential for many clinical applications such as computer-aided diagnosis and computer-aided surgery. But this task is challenging due to the weak boundaries of organs, the complexity of the background, and the variable sizes of different organs. To address these challenges, we introduce a novel framework for multi-organ segmentation of abdominal regions by using organ-attention networks with reverse connections (OAN-RCs) which are applied to 2D views of the 3D CT volume, and output estimates which are combined by statistical fusion exploiting structural similarity. More specifically, OAN is a two-stage deep convolutional network, where deep network features from the first stage are combined with the original image, in a second stage, to reduce the complex background and enhance the discriminative information for the target organs.”
    Abdominal multi-organ segmentation with organ-attention networks and statistical fusion
    Wang Y, Zhou Y, Shen W, Park S, Fishman EK, Yuille AL.
    Med Image Anal. 2019 Jul;55:88-102. doi: 10.1016/j.media.2019.04.005. Epub 2019 Apr 18.
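A minimal sketch of the multi-view combination step described above, assuming simple majority voting across the three 2D view directions. The paper's statistical fusion exploits structural similarity and is more elaborate; this toy version (function name and shapes are illustrative) only makes the idea of fusing per-view estimates into one 3D mask concrete:

```python
import numpy as np

def majority_fuse(axial, coronal, sagittal):
    """Fuse three binary 3D masks, one per 2D view direction:
    a voxel is foreground when at least 2 of the 3 views agree."""
    votes = axial.astype(int) + coronal.astype(int) + sagittal.astype(int)
    return votes >= 2

# toy per-view predictions for a 4x4x4 volume
rng = np.random.default_rng(0)
a = rng.random((4, 4, 4)) > 0.5
c = rng.random((4, 4, 4)) > 0.5
s = rng.random((4, 4, 4)) > 0.5
fused = majority_fuse(a, c, s)  # one consensus 3D mask
```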
  • "First, many abdominal organs have weak boundaries between spatially adjacent structures on CT, e.g. between the head of the pancreas and the duodenum. In addition, the entire CT volume includes a large variety of different complex structures. Morphological and topological complexity includes anatomically connected structures such as the gastrointestinal (GI) tract (stomach, duodenum, small bowel and colon) and vascular structures. The correct anatomical borders between connected structures may not always be visible in CT, especially in sectional images (i.e., 2D slices), and may be indicated only by subtle texture and shape change, which causes uncertainty even for human experts. This makes it hard for deep networks to distinguish the target organs from the complex background.”
    Abdominal multi-organ segmentation with organ-attention networks and statistical fusion
    Wang Y, Zhou Y, Shen W, Park S, Fishman EK, Yuille AL.
    Med Image Anal. 2019 Jul;55:88-102. doi: 10.1016/j.media.2019.04.005. Epub 2019 Apr 18.
  • “In general, 3D deep networks face far greater complex challenges than 2D deep networks. Both approaches rely heavily on graphics processing units (GPUs) but these GPUs have limited memory size which makes it difficult when dealing with full 3D CT volumes compared to 2D CT slices (which require much less memory). In addition, 3D deep networks typically require many more parameters than 2D deep networks and hence require much more training data, unless they are restricted to patches.”
    Abdominal multi-organ segmentation with organ-attention networks and statistical fusion
    Wang Y, Zhou Y, Shen W, Park S, Fishman EK, Yuille AL.
    Med Image Anal. 2019 Jul;55:88-102. doi: 10.1016/j.media.2019.04.005. Epub 2019 Apr 18.

  • “In this paper, we proposed a novel framework for multi-organ segmentation using OAN-RCs with statistical fusion exploiting structural similarity. Our two-stage organ-attention network reduces uncertainties at weak boundaries, focuses attention on organ regions with simple context, and adjusts FCN error by training the combination of original images and OAMs. Reverse connections deliver abstract level semantic information to lower layers so that hidden layers can be assisted to contain more semantic information and give good results even for small organs.”
    Abdominal multi-organ segmentation with organ-attention networks and statistical fusion
    Wang Y, Zhou Y, Shen W, Park S, Fishman EK, Yuille AL.
    Med Image Anal. 2019 Jul;55:88-102. doi: 10.1016/j.media.2019.04.005. Epub 2019 Apr 18.

  • “In addition to traditional methods, cinematic rendering (CR) as a novel 3D rendering technique can be used to generate photorealistic images with more accurate information regarding the anatomical details. CR can assist clinicians to visualize precisely the extent of tumor vascular invasion, which might be critical for surgical planning; however, the feasibility of this method and other novel techniques in routine clinical practice is yet to be studied.”
    Pitfalls in the MDCT of pancreatic cancer: strategies for minimizing errors
    Arya Haj-Mirzaian, Satomi Kawamoto, Atif Zaheer, Ralph H. Hruban, Elliot K. Fishman, Linda C. Chu
    Abdominal Radiology 2020 (in press)
  • Purpose: The purpose of this study was to report procedures developed to annotate abdominal computed tomography (CT) images from subjects without pancreatic disease that will be used as the input for deep convolutional neural networks (DNN) for development of deep learning algorithms for automatic recognition of a normal pancreas.
    Results: A total of 1150 dual-phase CT datasets from 575 subjects were annotated. There were 229 men and 346 women (mean age: 45 ± 12 years; range: 18-79 years). The mean intra-observer intra-subject dual-phase CT volume difference of all annotated structures was 4.27 mL (7.65%). The deep network prediction for multi-organ segmentation showed high fidelity with 89.4% and 1.29 mm in terms of mean Dice similarity coefficients and mean surface distances, respectively.
    Annotated normal CT data of the abdomen for deep learning: Challenges and strategies for implementation
    S. Park, L.C. Chu, E.K. Fishman, A.L. Yuille, B. Vogelstein, K.W. Kinzler et al
    Diagn Interv Imaging. 2020 Jan;101(1):35-44.
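The mean Dice similarity coefficient reported above (89.4%) measures volume overlap between predicted and annotated masks. A minimal sketch of the metric on toy masks (names, shapes, and the empty-mask convention are illustrative, not from the study's code):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient: 2|P ∩ G| / (|P| + |G|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True  # 16 pixels
gt = np.zeros((8, 8), bool); gt[3:7, 3:7] = True      # 16 pixels, 9 shared
score = dice(pred, gt)  # 2*9 / (16+16) = 0.5625
```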
  • “Conclusions: A reliable data collection/annotation process for abdominal structures was developed. This process can be used to generate large datasets appropriate for deep learning.”
    Annotated normal CT data of the abdomen for deep learning: Challenges and strategies for implementation
    S. Park, L.C. Chu, E.K. Fishman, A.L. Yuille, B. Vogelstein, K.W. Kinzler et al
    Diagn Interv Imaging. 2020 Jan;101(1):35-44.

  • “In conclusion, we developed a reliable and unique data collection and annotation process for abdominal structures using volumetric CT. The collected data can be used to train the deep learning network for automated recognition of normal abdominal organs. The success of this effort was dependent on a multidisciplinary team including radiologists, computer scientists, oncologists, and pathologists that have worked closely together. Pathologists confirmed that the pancreas in all subjects was normal without pancreatic neoplasms or other pathology. Oncologists provided expert guidance in experimental design and data analysis.”
    Annotated normal CT data of the abdomen for deep learning: Challenges and strategies for implementation
    S. Park, L.C. Chu, E.K. Fishman, A.L. Yuille, B. Vogelstein, K.W. Kinzler et al
    Diagn Interv Imaging. 2020 Jan;101(1):35-44.

  • Assessing Radiology Research on Artificial Intelligence:
    A Brief Guide for Authors, Reviewers, and Readers—From the Radiology Editorial Board
    David A. Bluemke et al.
    Radiology 2019; (in press) https://doi.org/10.1148/radiol.2019192515

  • Application of Deep Learning to Pancreatic Cancer Detection: Lessons Learned From Our Initial Experience.
    Chu LC, Park S, Kawamoto S, Wang Y, Zhou Y, Shen W, Zhu Z, Xia Y, Xie L, Liu F, Yu Q, Fouladi DF, Shayesteh S, Zinreich E, Graves JS, Horton KM, Yuille AL, Hruban RH, Kinzler KW, Vogelstein B, Fishman EK.
    J Am Coll Radiol. 2019 Sep;16(9 Pt B):1338-1342
  • “There is a common perception that one can simply provide any number of unprocessed cases to the computer, and AI can then easily perform the discovery or classification task. This approach is referred to as unsupervised learning, in which the deep-learning algorithm is presented with unlabeled data and learns to group the data by similarities or differences. Although this approach is plausible, complex image analysis, such as the detection of pancreatic cancer, may require supervised learning to achieve acceptable results.”
    Application of Deep Learning to Pancreatic Cancer Detection: Lessons Learned From Our Initial Experience.
    Chu LC, Park S, Kawamoto S, et al
    J Am Coll Radiol. 2019 Sep;16(9 Pt B):1338-1342
  • In supervised learning, the algorithm is provided with labeled data, referred to as ground truth, which is used as feedback to improve the algorithm during each iteration. The degree of data labeling can range from a per case level of normal versus abnormal to more detailed labeling in which the boundaries of each region of interest are drawn on the image on every image slice; this boundary drawing is referred to as “segmentation.” Because we have chosen to tackle a difficult AI application, we decided that supervised learning with high-quality input data would yield the best chance of success.
    Application of Deep Learning to Pancreatic Cancer Detection: Lessons Learned From Our Initial Experience.
    Chu LC, Park S, Kawamoto S, et al
    J Am Coll Radiol. 2019 Sep;16(9 Pt B):1338-1342

  • “Our initial decision to train the deep network to recognize all major abdominal organs instead of focusing on the pancreas proved to be a wise investment of time and resources. As we reviewed the false positives, the deep network occasionally predicted the duodenum or jejunum as an exophytic tumor. This was especially problematic in thin patients with poor fat planes. As we trained the deep network to recognize and segment the major abdominal organs, we were able to use this algorithm to prune out false-positive predictions that overlapped with other organs.”
    Application of Deep Learning to Pancreatic Cancer Detection: Lessons Learned From Our Initial Experience.
    Chu LC, Park S, Kawamoto S, et al
    J Am Coll Radiol. 2019 Sep;16(9 Pt B):1338-1342
  • “In the future, we envision that the AI system for automatic PDAC detection will be seamlessly integrated into the radiology workflow as a “second reader,” similar to how computer-aided diagnosis operates in mammographic screening. The AI system will directly receive the CT data sets from the PACS, automatically segment the abdominal organs, and annotate any suspicious pancreatic pathology. These annotated cases will be sent back to the PACS for the radiologist to review. The “second reader” can improve diagnostic confidence and has the potential to identify subtle cases that can be missed by a busy radiologist. By increasing the sensitivity and accuracy of PDAC detection, AI-integrated workflow has the potential to significantly improve patient outcomes. As radiologists, we should not sit on the sidelines. Instead, we should actively engage the AI revolution, hoping to enhance our efficiency and reduce our errors, eventually improving patient outcomes.”
    Application of Deep Learning to Pancreatic Cancer Detection: Lessons Learned From Our Initial Experience.
    Chu LC, Park S, Kawamoto S, et al
    J Am Coll Radiol. 2019 Sep;16(9 Pt B):1338-1342
  • “We aim at segmenting a wide variety of organs, including tiny targets (e.g., adrenal gland) and neoplasms (e.g., pancreatic cyst), from abdominal CT scans. This is a challenging task in three aspects. First, some organs (e.g., the pancreas), are highly variable in both anatomy and geometry, and thus very difficult to depict. Second, the neoplasms often vary a lot in its size, shape, as well as its location within the organ. Third, the targets (organs and neoplasms) can be considerably small compared to the human body, and so standard deep networks for segmentation are often less sensitive to these targets and thus predict less accurately especially around their boundaries.”
    Recurrent Saliency Transformation Network for Tiny Target Segmentation in Abdominal CT Scans
    Lingxi Xie, Qihang Yu, Yan Wang, Yuyin Zhou, Elliot K. Fishman, and Alan L. Yuille
    IEEE Trans Med Imaging. 2019 Jul 23. doi: 10.1109/TMI.2019.2930679
  • In this paper, we present an end-to-end framework named Recurrent Saliency Transformation Network (RSTN) for segmenting tiny and/or variable targets. RSTN is a coarse-to-fine approach, which uses prediction from the first (coarse) stage to shrink the input region for the second (fine) stage. A saliency transformation module is inserted between these two stages, so that (i) the coarse-scaled segmentation mask can be transferred as spatial weights and applied to the fine stage; and (ii) the gradients can be back-propagated from the loss layer to the entire network, so that the two stages are optimized in a joint manner. In the testing stage, we perform segmentation iteratively to improve accuracy.
    Recurrent Saliency Transformation Network for Tiny Target Segmentation in Abdominal CT Scans
    Lingxi Xie, Qihang Yu, Yan Wang, Yuyin Zhou, Elliot K. Fishman, and Alan L. Yuille
    IEEE Trans Med Imaging. 2019 Jul 23. doi: 10.1109/TMI.2019.2930679
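The coarse-to-fine shrinking step described above can be sketched as cropping the fine-stage input to a margin-padded bounding box around the coarse prediction. The margin value and function names below are illustrative assumptions, not the RSTN implementation:

```python
import numpy as np

def crop_to_mask(volume, coarse_mask, margin=2):
    """Return the sub-volume around the coarse foreground voxels,
    padded by `margin` and clipped to the volume bounds, plus the box."""
    idx = np.argwhere(coarse_mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + 1 + margin, volume.shape)
    box = tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))
    return volume[box], box

vol = np.arange(1000).reshape(10, 10, 10)
mask = np.zeros_like(vol, bool)
mask[4:6, 4:6, 4:6] = True          # tiny coarse detection
sub, box = crop_to_mask(vol, mask)  # the fine stage sees only this region
```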
  • “In this extended journal paper, we allow a gradual optimization to improve the stability of RSTN, and introduce a hierarchical version named H-RSTN to segment tiny and variable neoplasms such as pancreatic cysts. Experiments are performed on several CT datasets, including a public pancreas segmentation dataset, our own multi-organ dataset, and a cystic pancreas dataset. In all these cases, RSTN outperforms the baseline (a stage-wise coarse-to-fine approach) significantly. Confirmed by the radiologists in our team, these promising segmentation results can help early diagnosis of pancreatic cancer.”
    Recurrent Saliency Transformation Network for Tiny Target Segmentation in Abdominal CT Scans
    Lingxi Xie, Qihang Yu, Yan Wang, Yuyin Zhou, Elliot K. Fishman, and Alan L. Yuille
    IEEE Trans Med Imaging. 2019 Jul 23. doi: 10.1109/TMI.2019.2930679
  • “Motivated by the above, we propose a Recurrent Saliency Transformation Network (RSTN) for segmenting very small targets. The chief innovation lies in the mechanism to relate the coarse and fine stages with a saliency transformation module, which repeatedly transforms the segmentation probability map as spatial weights, from the previous iterations to the current iteration. In the training process, the differentiability of this module makes it possible to optimize the coarse-scaled and fine-scaled networks in a joint manner, so that the overall model gets improved after being aware of a global optimization goal.”
    Recurrent Saliency Transformation Network for Tiny Target Segmentation in Abdominal CT Scans
    Lingxi Xie, Qihang Yu, Yan Wang, Yuyin Zhou, Elliot K. Fishman, and Alan L. Yuille
    IEEE Trans Med Imaging. 2019 Jul 23. doi: 10.1109/TMI.2019.2930679
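The saliency-transformation idea above reuses the previous iteration's segmentation probability map as spatial weights on the current input. A hedged stand-in for the learned module (the multiplicative weighting and the `floor` parameter are illustrative assumptions, not the paper's learned transformation):

```python
import numpy as np

def saliency_weight(image, prob_map, floor=0.1):
    """Attenuate the image by the prior segmentation probability,
    keeping a small floor so background context is dimmed, not erased."""
    weights = floor + (1.0 - floor) * prob_map
    return image * weights

img = np.full((4, 4), 100.0)
prob = np.zeros((4, 4)); prob[1:3, 1:3] = 1.0  # prior: target in the center
weighted = saliency_weight(img, prob)  # center kept, background attenuated
```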

  • “We present the Recurrent Saliency Transformation Network, which enjoys three advantages. (i) Benefited by a (recurrent) global energy function, it is easier to generalize our models from training data to testing data. (ii) With joint optimization over two networks, both of them get improved individually. (iii) By incorporating multi-stage visual cues, more accurate segmentation results are obtained.”
    Recurrent Saliency Transformation Network for Tiny Target Segmentation in Abdominal CT Scans
    Lingxi Xie, Qihang Yu, Yan Wang, Yuyin Zhou, Elliot K. Fishman, and Alan L. Yuille
    IEEE Trans Med Imaging. 2019 Jul 23. doi: 10.1109/TMI.2019.2930679
  • “In conclusion, our study provided preliminary evidence that textural features derived from CT images were useful in differential diagnosis of pancreatic mucinous cystadenomas and serous cystadenomas, which may provide a non-invasive approach to determine whether surgery is needed in clinical practice. However, multicentre studies with larger sample size are needed to confirm these results.”
    Discrimination of Pancreatic Serous Cystadenomas From Mucinous Cystadenomas With CT Textural Features: Based on Machine Learning
    Yang J et al.
    Front. Oncol., 12 June 2019. https://doi.org/10.3389/fonc.2019.00494
  • Results: Only 31 of 102 serous cystic neoplasm cases in this study were recognized correctly by clinicians before the surgery. Twenty-two features were selected from the radiomics system after 100 bootstrapping repetitions of the least absolute shrinkage selection operator regression. The diagnostic scheme performed accurately and robustly, showing the area under the receiver operating characteristic curve = 0.767, sensitivity = 0.686, and specificity = 0.709. In the independent validation cohort, we acquired similar results with receiver operating characteristic curve = 0.837, sensitivity = 0.667, and specificity = 0.818.
    Conclusion: The proposed radiomics-based computer-aided diagnosis scheme could increase preoperative diagnostic accuracy and assist clinicians in making accurate management decisions.
    Computer-Aided Diagnosis of Pancreas Serous Cystic Neoplasms: A Radiomics Method on Preoperative MDCT Images
    Ran Wei et al.
    Technology in Cancer Research & Treatment
    Volume 18: 1-9; 2019
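The area under the ROC curve reported above can be computed with the rank-based (Mann-Whitney) formula. A toy sketch on made-up scores (the study's actual inputs were LASSO-selected radiomics features; the names and values below are hypothetical):

```python
import numpy as np

def auc(scores, labels):
    """AUC = P(a random positive scores above a random negative),
    counting ties as 1/2 (Mann-Whitney U / (n_pos * n_neg))."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

toy_scores = [0.9, 0.8, 0.4, 0.35, 0.2]  # hypothetical classifier outputs
toy_labels = [1, 1, 0, 1, 0]             # toy binary diagnosis labels
toy_auc = auc(toy_scores, toy_labels)    # 5 of 6 pos/neg pairs ranked correctly
```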
  • “A total of 17 intensity and texture features were selected, showing difference between SCNs and non-SCNs. Typically, the intensity T-range, wavelet intensity T-median, and wavelet neighborhood gray-tone difference matrix (NGTDM) busyness were the most distinguishable.”
    Computer-Aided Diagnosis of Pancreas Serous Cystic Neoplasms: A Radiomics Method on Preoperative MDCT Images
    Ran Wei et al.
    Technology in Cancer Research & Treatment
    Volume 18: 1-9; 2019
  • “In our retrospective study of 260 patients with PCN, we were surprised to find that the overall preoperative diagnostic accuracy by clinicians was 37.3% (97 of 260), and only 30.4% (31 of 102) of SCN cases were correctly diagnosed. This meant that more than two-thirds of patients with SCN suffered unnecessary pancreatic resection.”
    Computer-Aided Diagnosis of Pancreas Serous Cystic Neoplasms: A Radiomics Method on Preoperative MDCT Images
    Ran Wei et al.
    Technology in Cancer Research & Treatment
    Volume 18: 1-9; 2019
  • “Furthermore, radiomics high-throughput features containing intensity features, texture features, and their wavelet decomposition forms fully utilized image information and obtained more image details that were hard to discover with the naked human eyes.”
    Computer-Aided Diagnosis of Pancreas Serous Cystic Neoplasms: A Radiomics Method on Preoperative MDCT Images
    Ran Wei et al.
    Technology in Cancer Research & Treatment
    Volume 18: 1-9; 2019
  • “In conclusion, our study proposed a radiomics-based CAD scheme and stressed the role of radiomics analysis as a novel noninvasive method for improving the preoperative diagnostic accuracy of SCNs. In all, 409 quantitative features were automatically extracted, and a feature subset containing the 22 most statistically significant features was selected after 100 bootstrapping repetitions. Our proposed method improved the diagnostic accuracy and performed well in all metrics, with AUC of 0.767 in the cross-validation cohort and 0.837 in the independent validation cohort. This demonstrated that our CAD scheme could provide a powerful reference for the diagnosis of clinicians to reduce misjudgment and avoid overtreatment.”
    Computer-Aided Diagnosis of Pancreas Serous Cystic Neoplasms: A Radiomics Method on Preoperative MDCT Images
    Ran Wei et al.
    Technology in Cancer Research & Treatment
    Volume 18: 1-9; 2019
  • “In this paper, we adopt 3D CNNs to segment the pancreas in CT images. Although deep neural networks have been proven to be very effective on many 2D vision tasks, it is still challenging to apply them to 3D applications due to the limited amount of annotated 3D data and limited computational resources. We propose a novel 3D-based coarse-to-fine framework for volumetric pancreas segmentation to tackle these challenges. The proposed 3D-based framework outperforms the 2D counterpart by a large margin since it can leverage the rich spatial information along all three axes.”


    A 3D Coarse-to-Fine Framework for Automatic Pancreas Segmentation
    Zhuotun Zhu, Yingda Xia, Wei Shen, Elliot K. Fishman, Alan L. Yuille
    arXiv:1712.00201v1 [cs.CV] 1 Dec 2017

  • “In this work, we proposed a novel 3D network called “ResDSN” integrated with a coarse-to-fine framework to simultaneously achieve high segmentation accuracy and low time cost. The backbone network “ResDSN” is carefully designed to only have long residual connections for efficient inference. To our best knowledge, we are the first to segment the challenging pancreas using 3D networks which leverage the rich spatial information to achieve the state-of-the-art.”

    
    A 3D Coarse-to-Fine Framework for Automatic Pancreas Segmentation
    Zhuotun Zhu, Yingda Xia, Wei Shen, Elliot K. Fishman, Alan L. Yuille
    arXiv:1712.00201v1 [cs.CV] 1 Dec 2017

  • “To address these issues, we propose a concise and effective framework based on 3D deep networks for pancreas segmentation, which can simultaneously achieve high segmentation accuracy and low time cost. Our framework is formulated in a coarse-to-fine manner. In the training stage, we first train a 3D FCN from the sub-volumes sampled from an entire CT volume. We call this ResDSN Coarse model, which aims to obtain the rough location of the target pancreas from the whole CT volume by making full use of the overall 3D context. Then, we train another 3D FCN from the sub-volumes sampled only from the ground truth bounding boxes of the target pancreas. We call this the ResDSN Fine model, which can refine the segmentation based on the coarse result.”


    A 3D Coarse-to-Fine Framework for Automatic Pancreas Segmentation
    Zhuotun Zhu, Yingda Xia, Wei Shen, Elliot K. Fishman, Alan L. Yuille
    arXiv:1712.00201v1 [cs.CV] 1 Dec 2017
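The two training regimes above (the coarse model sampled from the whole CT volume, the fine model sampled only inside the ground-truth bounding box) can be sketched as a single sampling routine. The patch size, function name, and box format are illustrative assumptions, not the authors' code:

```python
import numpy as np

def sample_subvolume(volume, patch=(4, 4, 4), within=None, rng=None):
    """Draw one random sub-volume of size `patch`. `within` restricts the
    draw to a bounding box (z0, z1, y0, y1, x0, x1); None means anywhere."""
    rng = rng if rng is not None else np.random.default_rng()
    b = within if within is not None else (
        0, volume.shape[0], 0, volume.shape[1], 0, volume.shape[2])
    starts = [int(rng.integers(b[2 * i], max(b[2 * i] + 1, b[2 * i + 1] - patch[i] + 1)))
              for i in range(3)]
    sl = tuple(slice(s, s + p) for s, p in zip(starts, patch))
    return volume[sl]

vol = np.zeros((16, 16, 16))
coarse_patch = sample_subvolume(vol)                              # anywhere in the volume
fine_patch = sample_subvolume(vol, within=(6, 12, 6, 12, 6, 12))  # inside the GT box
```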

  • “This work is motivated by the difficulty of small organ segmentation. As the target is often small, it is required to focus on a local input region, but sometimes the network is confused due to the lack of contextual information. We present the Recurrent Saliency Transformation Network, which enjoys three advantages. (i) Benefited by a (recurrent) global energy function, it is easier to generalize our models from training data to testing data. (ii) With joint optimization over two networks, both of them get improved individually. (iii) By incorporating multi-stage visual cues, more accurate segmentation results are obtained. As the fine stage is less likely to be confused by the lack of contexts, we also observe better convergence during iterations.”


    Recurrent Saliency Transformation Network: Incorporating Multi-Stage Visual Cues for Small Organ Segmentation
    Qihang Yu, Lingxi Xie, Yan Wang, Yuyin Zhou, Elliot K. Fishman, Alan L. Yuille
    arXiv:1709.04518v3 [cs.CV] 18 Nov 2017
  • “This paper presents a Recurrent Saliency Transformation Network. The key innovation is a saliency transformation module, which repeatedly converts the segmentation probability map from the previous iteration as spatial weights and applies these weights to the current iteration. This brings us two-fold benefits. In training, it allows joint optimization over the deep networks dealing with different input scales. In testing, it propagates multi-stage visual information throughout iterations to improve segmentation accuracy.”


    Recurrent Saliency Transformation Network: Incorporating Multi-Stage Visual Cues for Small Organ Segmentation
    Qihang Yu, Lingxi Xie, Yan Wang, Yuyin Zhou, Elliot K. Fishman, Alan L. Yuille
    arXiv:1709.04518v3 [cs.CV] 18 Nov 2017
  • “Automatic segmentation of an organ and its cystic region is a prerequisite of computer-aided diagnosis. In this paper, we focus on pancreatic cyst segmentation in abdominal CT scan. This task is important and very useful in clinical practice yet challenging due to the low contrast in boundary, the variability in location, shape and the different stages of the pancreatic cancer. Inspired by the high relevance between the location of a pancreas and its cystic region, we introduce extra deep supervision into the segmentation network, so that cyst segmentation can be improved with the help of relatively easier pancreas segmentation.”


    Deep Supervision for Pancreatic Cyst Segmentation in Abdominal CT Scans
    Yuyin Zhou, Lingxi Xie, Elliot K. Fishman, and Alan L. Yuille
    (in) Medical Image Computing and Computer Assisted Intervention − MICCAI 2017, pages 222-231
  • “This paper presents the first system for pancreatic cyst segmentation which can work without human assistance on the testing stage. Motivated by the high relevance of a cystic pancreas and a pancreatic cyst, we formulate pancreas segmentation as an explicit variable in the formulation, and introduce deep supervision to assist the network training process. The joint optimization can be factorized into two stages, making our approach very easy to implement. We collect a dataset with 131 pathological cases. Based on a coarse-to-fine segmentation algorithm, our approach produces reasonable cyst segmentation results. It is worth emphasizing that our approach does not require any extra human annotations on the testing stage, which is especially practical in assisting common patients in cheap and periodic clinical applications.”

    
    Deep Supervision for Pancreatic Cyst Segmentation in Abdominal CT Scans
    Yuyin Zhou, Lingxi Xie, Elliot K. Fishman, and Alan L. Yuille
    (in) Medical Image Computing and Computer Assisted Intervention − MICCAI 2017, pages 222-231
  • “The pancreas is a highly deformable organ that has a shape and location that is greatly influenced by the presence of adjacent structures. This makes automated image analysis of the pancreas extremely challenging. A number of different approaches have been taken to automated pancreas analysis, including the use of anatomic atlases, the location of the splenic and portal veins, and state-of-the-art computer science methods such as deep learning.”

    Progress in Fully Automated Abdominal CT Interpretation
    Summers RM
    AJR 2016; 207:67–79
  • “A recent advance in computer science is the refinement of neural networks, a type of machine learning classifier used to make decisions from data. This refinement, known generically as deep learning but more specifically as convolutional neural networks, has shown dramatic improvements in automated intelligence applications. Initially drawing attention for impressive improvements in speech recognition and natural image interpretation, deep learning is now being applied to medical images, as described already in the sections on the pancreas and colitis.”


    Progress in Fully Automated Abdominal CT Interpretation
    Summers RM
    AJR 2016; 207:67–79
© 1999-2020 Elliot K. Fishman, MD, FACR. All rights reserved.