Imaging Pearls: Deep Learning and the Pancreas (CTisus)

  • “The multivariable logistic regression model included sex, size, location, shape, cyst characteristic, and cystic wall thickening. The individualized prediction nomogram showed good discrimination in the training sample (AUC 0.89; 95% CI 0.83–0.95) and in the validation sample (AUC 0.81; 95% CI 0.70–0.94). If the threshold probability is between 0.03 and 0.9, and > 0.93 in the prediction model, using the nomogram to predict SCN and MCN is more beneficial than the treat-all-patients as SCN scheme or the treat-all-patients as MCN scheme. The prediction model showed better discrimination than the radiologists’ diagnosis (AUC = 0.68).”
    A nomogram for predicting pancreatic mucinous cystic neoplasm and serous cystic neoplasm  
    Chengwei Shao et al.
    Abdominal Radiology https://doi.org/10.1007/s00261-021-03038-3 
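  • The multivariable model above is a standard logistic regression over coded CT features, with discrimination summarized by the AUC. A minimal sketch of that workflow is shown below; the feature encoding and the synthetic data are illustrative assumptions, not the authors' dataset or code.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 200
    # Hypothetical encoded predictors: sex, size, location, shape,
    # cyst characteristic (oligo/polycystic), cystic wall thickening.
    X = np.column_stack([
        rng.integers(0, 2, n),       # sex
        rng.normal(3.0, 1.5, n),     # size (cm)
        rng.integers(0, 3, n),       # location: head / body / tail
        rng.integers(0, 2, n),       # shape: round / lobulated
        rng.integers(0, 2, n),       # oligocystic vs polycystic
        rng.integers(0, 2, n),       # wall: thin vs thick
    ])
    y = rng.integers(0, 2, n)        # 0 = SCN, 1 = MCN (synthetic labels)

    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("validation AUC:", round(roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]), 2))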
  • All tumors were evaluated for the following characteristics: (1) CT-reported tumor size (i.e., the maximum cross-sectional diameter of the tumor [13]); (2) tumor location: pancreatic head, body, or tail; (3) shape: round or lobulated (lobulation was defined as the presence of rounded contours that could not be described as the borders of the same circle [9]); (4) cyst characteristic: oligocystic or polycystic; (5) cystic wall: thin or thick (thin was defined as < 2 mm while thick was defined as ≥ 2 mm [9]); (6) calcification; (7) enhanced mural nodule; (8) parenchymal atrophy; (9) common bile duct cutoff and dilation (> 10 mm); (10) main pancreatic duct (MPD) cutoff and dilation (> 3 mm); (11) pancreatitis identified by stranding of the peripancreatic fat tissue, ill-defined parenchymal contours, and fluid collections in the peripancreatic region; (12) contour abnormality; and (13) number of lesions: 1 or ≥ 2.  
    A nomogram for predicting pancreatic mucinous cystic neoplasm and serous cystic neoplasm  
    Chengwei Shao et al.
    Abdominal Radiology https://doi.org/10.1007/s00261-021-03038-3 
  • “There were several limitations to this study. First, the number of patients was relatively small. Second, this was a single-center, retrospective analysis. In the future, we will expand the number of cases and perform a multi-center validation of the model. Third, the predicted model in this study only focused on SCN and MCN, and did not include other cystic lesions of the pancreas such as IPMN, pseudocyst, and retention cyst. Lastly, we only used CT characteristics to develop the model. We did not combine radiomics features, although artificial intelligence is becoming a hot topic. In the future, we will combine the CT characteristics and radiomics features to develop a more accurate model.”
    A nomogram for predicting pancreatic mucinous cystic neoplasm and serous cystic neoplasm  
    Chengwei Shao et al.
    Abdominal Radiology https://doi.org/10.1007/s00261-021-03038-3 
  • “Lastly, we only used CT characteristics to develop the model. We did not combine radiomics features, although artificial intelligence is becoming a hot topic. In the future, we will combine the CT characteristics and radiomics features to develop a more accurate model.”
    A nomogram for predicting pancreatic mucinous cystic neoplasm and serous cystic neoplasm  
    Chengwei Shao et al.
    Abdominal Radiology https://doi.org/10.1007/s00261-021-03038-3  
  • “Pancreatic cancer (pancreatic ductal adenocarcinoma [PDAC]) is associated with a dire prognosis and a 5-year survival rate of only 10%. This statistic is somewhat misleading given that 52% of the patients will develop metastatic disease, with a resulting 2.9%, 5-year relative survival rate. However, for those patients with localized cancer where the tumor is confined to the primary site, the 5-year relative survival rate is 39.4%. It is estimated that in 2020, there will be 57,600 new cases of PDAC and an estimated 47,050 will die of this disease.”  
    Artificial Intelligence and Early Detection of Pancreatic Cancer: 2020 Summative Review  
    Barbara Kenner, PhD,* Suresh T. Chari, MD,† David Kelsen, MD, Fishman EK et al.
    Pancreas. 2021 Mar 1;50(3):251-279
  • "Pancreatic ductal adenocarcinoma has the poorest overall survival of all the major cancer types, with a 5-year relative  survival rate that just reached 10%. This is due in part to the latestage at presentation, so that 49.6% of cases of newly diagnosed PDAC present with distant metastases, 29.1% present with re- gional lymph node involvement, and only 10.8% have tumors that are localized solely within the pancreas.”
    Artificial Intelligence and Early Detection of Pancreatic Cancer: 2020 Summative Review  
    Barbara Kenner, PhD,* Suresh T. Chari, MD,† David Kelsen, MD, Fishman EK et al.
    Pancreas. 2021 Mar 1;50(3):251-279

  • "In this context, the big data field provides a conceptual framework for analysis across the full spectrum of disease that may better capture patient subcategories, in particular when considering longitudinal disease development in a lifelong perspective. Here, variation in “healthy” diagnosis-free routes toward disease and later differences in disease comorbidities are currently of high interest. Using health care sector, socioeconomic, and consumer data, the precision medicine field works increasingly toward such a disease spectrum-wide approach. Ideally, this involves data describing healthy individuals, many of whom will later become sick—to have long-range correlations that relate to outcomes available for analysis. This notion extends the traditional disease trajectory concept into healthy life-course periods potentially enabling stratification of patient cohorts by systematically observed differences present before the onset and diagnosis of disease.”
    Artificial Intelligence and Early Detection of Pancreatic Cancer: 2020 Summative Review  
    Barbara Kenner, PhD,* Suresh T. Chari, MD,† David Kelsen, MD, Fishman EK et al.
    Pancreas. 2021 Mar 1;50(3):251-279
  • "Ultimately, it is likely that AI will transform much of the practice of medicine. AI will be used to interpret radiographs, ultrasounds, CT, and MRI, either as an adjunct to the clinician's interpretation or as the standalone reading.88 Health care organizations will use AI systems to extract and analyze electronic health record (EHR) data to better allocate staff and other resources, identify patients at risk for acute decompensation, and prevent medication errors.148 Using sensors on commodity devices such as smartphones, wearables, smart speakers, laptops, and tablets, individuals will be able to share health data during their daily lives and help generate a longitudinal personal health record, with pertinent information incorporated into their EHR. By extracting information from the EHR and incorporating data during an encounter with a patient, clinicians can be provided with a differential diagnosis in real-time with probabilities included.”
    Artificial Intelligence and Early Detection of Pancreatic Cancer: 2020 Summative Review  
    Barbara Kenner, PhD,* Suresh T. Chari, MD,† David Kelsen, MD, Fishman EK et al.
    Pancreas. 2021 Mar 1;50(3):251-279
  • "Because of the “black box” quality of many deep learning algorithms, clinicians and patients may be hesitant to depend on AI-based solutions. This fear is not unfounded. For example, it was discovered that an algorithm evaluating data from images of skin lesions was more likely to classify the lesion as malignant if a ruler was included in the photograph.149 The reticence by clinicians to embrace AI-based medical devices may also be explained by the paucity of peer-reviewed prospective studies assessing the efficacy of these systems.Finally, regulatory assessment of the effectiveness and safety of AI-based products is different from that of traditional medical devices.Regulatory agencies are working to find the best processes for determining whether an AI medical device should be cleared for clinical use.”
    Artificial Intelligence and Early Detection of Pancreatic Cancer: 2020 Summative Review  
    Barbara Kenner, PhD,* Suresh T. Chari, MD,† David Kelsen, MD, Fishman EK et al.
    Pancreas. 2021 Mar 1;50(3):251-279
  • "The ability to reliably detect very early-stage PDAC in asymptomatic patients should result in a major improvement in survival. This hypothesis is based on the observation that the prognosis for PDAC is clearly related to the pathological stage of the tumor at the time of diagnosis. Using the SEER database, Ansari et al reported that 5-year survival for patients with lymph node–negative primary PDAC less than 1-cm cancers is ~60%; with primary tumors of 2 cm or larger even without lymph node metastasis, survival was less than 20%. However, less than 1% of patients are found with primary PDAC less than 1 centimeter in size. Pancreatic ductal adenocarcinoma is diagnosed in the large majority of even stage IA patients because of symptoms, not as a result of an early detection program. The hypothesis that the earlier the stage of a PDAC, the better the outcome, is in concert with data from many other solid tumors, including breast, non–small cell lung, colorectal, prostate, and gastric cancers.”
    Artificial Intelligence and Early Detection of Pancreatic Cancer: 2020 Summative Review  
    Barbara Kenner, PhD,* Suresh T. Chari, MD,† David Kelsen, MD, Fishman EK et al.
    Pancreas. 2021 Mar 1;50(3):251-279
  • "Project Felix is a Lustgarten Foundation initiative led by Elliott Fishman at Johns Hopkins University to develop deep learning tools that can detect pancreatic tumors when they are smaller and with greater reliability than human readers alone. This effort has involved meticulous manual segmentation of thousands of abdominal CT scans to serve as a training and testing cohort, which represents the largest effort in this domain in the world. In collaboration with the computer scientist Alan Yuille. Project Felix has produced at least 17 articles on techniques to automatically detect and characterize lesions within the pancreas (https://www.ctisus.com/responsive/deep-learning/felix.asp).”
    Artificial Intelligence and Early Detection of Pancreatic Cancer: 2020 Summative Review  
    Barbara Kenner, PhD,* Suresh T. Chari, MD,† David Kelsen, MD, Fishman EK et al.
    Pancreas. 2021 Mar 1;50(3):251-279
  • "Eugene Koay from The University of Texas MD Anderson Cancer Center (MDACC) has previously characterized subtypes of PDAC on CT scans, whereby conspicuous (high delta) PDAC tumors are more likely to have aggressive biology, a higher rate of common pathway mutations, and poorer clinical outcomes compared with inconspicuous (low delta) tumors.His group has recently completed an analysis, currently under review, that shows that high-delta tumors demonstrate higher growth rates and shorter initiation times than their low-delta counterparts in the prediagnostic period. Although not strictly an AI initiative, his work serves as a rich foundation for future AI initiatives in this space. Drs Koay and Anirban Maitra at the MDACC are leading the NCI-sponsored EDRN initiative to assemble a prediagnosis pancreatic cancer cohort that could facilitate AI research into screening and early detection.”
    Artificial Intelligence and Early Detection of Pancreatic Cancer: 2020 Summative Review  
    Barbara Kenner, PhD,* Suresh T. Chari, MD,† David Kelsen, MD, Fishman EK et al.
    Pancreas. 2021 Mar 1;50(3):251-279
  • “Pancreatic cancer remains a major health problem, and only less than 20% of patients have resectable disease at the time of initial diagnosis. Systemic chemotherapy is often used in the patients with borderline resectable, locally advanced unresectable disease and metastatic disease. CT is often used to assess for therapeutic response; however, conventional imaging including CT may not correctly reflect treatment response after chemotherapy.”
    Assessment of iodine uptake by pancreatic cancer following chemotherapy using dual-energy CT.  
    Kawamoto S, Fuld MK, Laheru D, Huang P, Fishman EK.  
    Abdom Radiol (NY). 2018;43(2):445-456. 
  • "Dual-energy (DE) CT can acquire datasets at two different photon spectra in a single CT acquisition, and permits separating materials and extract iodine by applying a material decomposition algorithm. Quantitative iodine mapping may have an added value over conventional CT imaging for monitoring the treatment effects in patients with pancreatic cancer and potentially serve as a unique biomarker for treatment response. In this pictorial essay, we will review the technique for iodine quantification of pancreatic cancer by DECT and discuss our observations of iodine quantification at baseline and after systemic chemotherapy with conventional cytotoxic agents.”
    Assessment of iodine uptake by pancreatic cancer following chemotherapy using dual-energy CT.  
    Kawamoto S, Fuld MK, Laheru D, Huang P, Fishman EK.  
    Abdom Radiol (NY). 2018;43(2):445-456. 
  • “The parameters obtained using tumor segmentation software included (1) RECIST diameter (mm), (2) tumor volume (mL), (3) mean CT number of tumor (HU) at simulated weighted-average 120-kVp images, (4) iodine uptake by tumor per volume of tissue (mg/mL), and (5) normalized tumor iodine uptake (tumor iodine uptake normalized to the reference value acquired using region of interest placed in the abdominal aorta at the level of the pancreatic tumor, calculated by tumor iodine uptake [mg/dL]/abdominal aortic uptake [mg/dL]).”
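  • The normalized tumor iodine uptake described above is simply the tumor iodine concentration divided by the aortic reference value; the numbers in the short illustration below are made up.

    # Illustrative only: normalized tumor iodine uptake as defined above.
    tumor_iodine = 1.2        # iodine uptake in the tumor ROI (hypothetical value)
    aortic_iodine = 9.5       # reference ROI in the abdominal aorta (hypothetical value)
    normalized_uptake = tumor_iodine / aortic_iodine
    print(round(normalized_uptake, 3))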
  • “In conclusion, iodine uptake by pancreatic adenocarcinoma using DECT may add supplemental information for assessment of treatment response, although tumor iodine uptake by pancreatic adenocarcinoma is small, and it may be difficult to apply to each case. Normalized tumor iodine uptake might be more sensitive than iodine concentration to measure treatment response. More data are necessary to confirm these observations.”
    Assessment of iodine uptake by pancreatic cancer following chemotherapy using dual-energy CT.  
    Kawamoto S, Fuld MK, Laheru D, Huang P, Fishman EK.  
    Abdom Radiol (NY). 2018;43(2):445-456.
  • Purpose: To evaluate the utility of dual-energy CT iodine material density images to preoperatively identify nodal positivity in pancreatic cancer patients who underwent neoadjuvant therapy.
    Conclusion: The dual energy based minimum normalized iodine value of all nodes in the surgical field on preoperative studies has modest utility in differentiating N0 from N1/2, and generally outperformed conventional features for identifying nodal metastases.
    CT features predictive of nodal positivity at surgery in pancreatic cancer patients following neoadjuvant therapy in the setting of dual energy CT.  
    Le O, Javadi S, Bhosale PR et al.  
    Abdom Radiol (NY). 2021 Jan 20. doi: 10.1007/s00261-020-02917-5. Epub ahead of print. PMID: 33471129.
  • Background: The diagnostic performance of CT for pancreatic cancer is interpreter-dependent, and approximately 40% of tumours smaller than 2 cm evade detection. Convolutional neural networks (CNNs) have shown promise in image analysis, but the networks’ potential for pancreatic cancer detection and diagnosis is unclear. We aimed to investigate whether CNN could distinguish individuals with and without pancreatic cancer on CT, compared with radiologist interpretation.
    Interpretation: CNN could accurately distinguish pancreatic cancer on CT, with acceptable generalisability to images of patients from various races and ethnicities. CNN could supplement radiologist interpretation.
    Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: a retrospective study with cross-racial external validation
    Kao-Lang Liu et al.
    Lancet Digital Health 2020; 2: e303–13
  • “Findings: Between Jan 1, 2006, and Dec 31, 2018, we obtained CT images. In local test set 1, CNN-based analysis had a sensitivity of 0·973, specificity of 1·000, and accuracy of 0·986 (area under the curve [AUC] 0·997 [95% CI 0·992–1·000]). In local test set 2, CNN-based analysis had a sensitivity of 0·990, specificity of 0·989, and accuracy of 0·989 (AUC 0·999 [0·998–1·000]). In the US test set, CNN-based analysis had a sensitivity of 0·790, specificity of 0·976, and accuracy of 0·832 (AUC 0·920 [0·891–0·948]). CNN-based analysis achieved higher sensitivity than radiologists did (0·983 vs 0·929, difference 0·054 [95% CI 0·011–0·098]; p=0·014) in the two local test sets combined. CNN missed three (1·7%) of 176 pancreatic cancers (1·1–1·2 cm). Radiologists missed 12 (7%) of 168 pancreatic cancers (1·0–3·3 cm), of which 11 (92%) were correctly classified using CNN. The sensitivity of CNN for tumours smaller than 2 cm was 92·1% in the local test sets and 63·1% in the US test set.”
    Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: a retrospective study with cross-racial external validation
    Kao-Lang Liu et al.
    Lancet Digital Health 2020; 2: e303–13
  • Findings: CNN-based analysis achieved higher sensitivity than radiologists did (0·983 vs 0·929, difference 0·054 [95% CI 0·011–0·098]; p=0·014) in the two local test sets combined. CNN missed three (1·7%) of 176 pancreatic cancers (1·1–1·2 cm). Radiologists missed 12 (7%) of 168 pancreatic cancers (1·0–3·3 cm), of which 11 (92%) were correctly classified using CNN. The sensitivity of CNN for tumours smaller than 2 cm was 92·1% in the local test sets and 63·1% in the US test set.
    Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: a retrospective study with cross-racial external validation
    Kao-Lang Liu et al.
    Lancet Digital Health 2020; 2: e303–13
  • “CNN can accurately differentiate pancreatic cancer from non-cancerous pancreas, and with improvements might accommodate variations in patient race and ethnicity and imaging parameters that are inevitable in real-world clinical practice. CNN holds promise for developing computer-aided detection and diagnosis tools for pancreatic cancer to supplement radiologist interpretation.”
    Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: a retrospective study with cross-racial external validation
    Kao-Lang Liu et al.
    Lancet Digital Health 2020; 2: e303–13
  • “In conclusion, this study provided a proof of concept that CNN can accurately distinguish pancreatic cancer on portal venous CT images. The CNN model holds promise as a computer-aided diagnostic tool to assist radiologists and clinicians in diagnosing pancreatic cancer.”
    Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: a retrospective study with cross-racial external validation
    Kao-Lang Liu et al.
    Lancet Digital Health 2020; 2: e303–13
  • “Interpretation CNN could accurately distinguish pancreatic cancer on CT, with acceptable generalisability to images of patients from various races and ethnicities. CNN could supplement radiologist interpretation.”
    Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: a retrospective study with cross-racial external validation
    Kao-Lang Liu et al.
    Lancet Digital Health 2020; 2: e303–13
  • “Deep learning is a type of machine learning method in which algorithms are trained to perform tasks by learning patterns from data rather than by explicit programming. Deep neural networks are inspired by biological neural networks and use a matrix of interconnected nodes to mimic the function of a biologic neuron. The basic unit of an artificial neural network is a node. It takes a set of input features, multiplies these features by corresponding weights in the form of mathematical equations, and then passes the output to the next layer of nodes. The deep network architecture uses multiple layers of interconnected nodes to develop a mathematical model that best fits the data. The outputs are compared with the “ground truth,” and errors are used as feedback to adjust the weights in the network to minimize error in subsequent iterations.”
    Pancreatic Cancer Imaging: A New Look at an Old Problem
    Linda C. Chu MD, Seyoun Park, Satomi Kawamoto, Alan L. Yuille , Ralph H. Hruban, Elliot K. Fishman
    Current Problems in Diagnostic Radiology (in press)
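  • The node behaviour described above (inputs multiplied by weights, the output passed forward, and errors against the ground truth fed back to adjust the weights) can be sketched in a few lines of Python. This is a minimal illustration with synthetic data and a single node, not the implementation used in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(8, 3))          # 8 samples, 3 input features
    y_true = rng.normal(size=(8, 1))     # "ground truth" targets
    w = rng.normal(size=(3, 1))          # weights of a single node
    b = np.zeros((1, 1))

    for step in range(100):
        y_pred = x @ w + b               # node output: inputs times weights
        error = y_pred - y_true          # compare output with ground truth
        grad_w = x.T @ error / len(x)    # feedback: gradient of the squared error
        grad_b = error.mean(axis=0, keepdims=True)
        w -= 0.1 * grad_w                # adjust weights to reduce the error
        b -= 0.1 * grad_b

    print("final mean squared error:", round(float(np.mean(error ** 2)), 4))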

  • Automatic detection of pancreatic ductal adenocarcinoma (PDAC) with deep learning. (Left panel) Axial IV contrast- enhanced CT image shows a hypoenhancing mass in the pancreatic body (arrow) with dilated pancreatic duct (arrowhead). (Middle panel) Manual segmentation of the tumor (red), pancreatic duct (green), and background pancreas (blue). (Right panel) Deep network prediction of tumor (red), pancreatic duct (green) and background pancreas (blue).

  • Automatic detection of pancreatic neuroendocrine tumor (PanNET) with deep learning. (Left panel) Axial IV contrast-enhanced CT image shows a subtle hyperenhancing mass within the head of the pancreas (arrow). (Middle panel) Manual segmentation of tumor (pink) and background pancreas (blue). (Right panel) Deep network prediction of tumor (pink) and background pancreas (blue).

  • Automatic detection of intraductal papillary mucinous neoplasm (IPMN) with deep learning. (Left panel) Axial IV contrast-enhanced CT image shows multiple well-circumscribed cystic lesions in the pancreas (arrow). (Middle panel) Manual segmentation of cystic tumors (yellow) and background pancreas (blue). (Right panel) Deep network prediction of cystic tumors (yellow) and background pancreas (blue).

  • A schematic illustrating the radiomics feature extraction and analysis process. Radiomics features can be classified into signal intensity, shape, texture, and filtered features (e.g., wavelets and Laplacian of Gaussian [LoG]). (Left panel) Input of imaging datasets (normal vs. abnormal) with annotation of regions of interest. (Middle panel) Extraction of radiomics features, including histogram of voxel signal intensities, shape features based on surface rendering of region of interest, and filtered features. (Right panel) The raw data are processed through feature selection to identify the most relevant features. These features can be correlated with clinical outcomes in classification tasks.
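  • The pipeline in the schematic (ROI annotation, feature extraction, feature selection, classification) can be mocked up as follows. The intensity and shape features, the synthetic volumes, and the classifier choice are illustrative assumptions rather than the specific radiomics features used in any of the cited studies.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, f_classif

    def intensity_and_shape_features(volume, mask):
        """First-order intensity statistics plus a crude shape descriptor."""
        voxels = volume[mask > 0]
        return np.array([
            voxels.mean(), voxels.std(), np.percentile(voxels, 10),
            np.percentile(voxels, 90), float(mask.sum()),   # ROI volume in voxels
        ])

    rng = np.random.default_rng(0)
    features, labels = [], []
    for label in (0, 1):                 # synthetic "normal" vs "abnormal" volumes
        for _ in range(30):
            vol = rng.normal(40 + 20 * label, 10, size=(16, 16, 16))
            msk = np.zeros_like(vol)
            msk[4:12, 4:12, 4:12] = 1    # cubic ROI standing in for a segmentation
            features.append(intensity_and_shape_features(vol, msk))
            labels.append(label)
    X, y = np.array(features), np.array(labels)

    selector = SelectKBest(f_classif, k=3).fit(X, y)        # feature selection step
    clf = RandomForestClassifier(random_state=0).fit(selector.transform(X), y)
    print("training accuracy:", clf.score(selector.transform(X), y))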
  • “Radiomics features have also been used to predict PanNET grade, one of the most important prognostic factors in predicting patient survival. Qualitative features such as ill-defined margins, heterogeneous enhancement, low-level enhancement, vascular involvement, and main pancreatic duct dilatation have been reported to be helpful features in predicting higher tumor grade. Radiomics features achieved equivalent or superior performance compared to traditional clinical and imaging features in most, but not all, studies, and were associated with higher tumor grades and worse progression-free survival in the majority of these studies. The addition of radiomics features to traditional CT features may improve the accuracy of PanNET grade prediction.”
    Pancreatic Cancer Imaging: A New Look at an Old Problem
    Linda C. Chu MD, Seyoun Park, Satomi Kawamoto, Alan L. Yuille , Ralph H. Hruban, Elliot K. Fishman
    Current Problems in Diagnostic Radiology (in press)
  • “Radiomics features have also been reported to be predictive of overall survival in patients with unresectable or locally advanced PDAC. Not surprisingly, the presence of metastatic disease at presentation was the most predictive of poor overall survival. Radiomics features associated with tumor heterogeneity were also found to be poor prognostic factors. There is speculation that tumor hypoattenuation may reflect areas of hypoxic necrosis, which may suggest more aggressive underlying tumor biology as well as impaired response to chemotherapy and radiation therapy. Low attenuation may also be evidence of extensive venous invasion by the cancer.”
    Pancreatic Cancer Imaging: A New Look at an Old Problem
    Linda C. Chu MD, Seyoun Park, Satomi Kawamoto, Alan L. Yuille , Ralph H. Hruban, Elliot K. Fishman
    Current Problems in Diagnostic Radiology (in press)
  • "While VR uses a simple ray cast method to generate 3D images, CR uses Monte Carlo path tracing that takes direct and indirect illumination into account. With CR, each pixel is formed by thousands of rays passing through the volumetric dataset and includes effects of light rays from scatter and from voxels adjacent to the paths of the rays. CR has the potential to more accurately depict complex anatomy. When applied to pancreatic imaging, CR can be used to accentuate focal textural change and enhance appreciation of internal architecture (e.g., septations, mural nodules) to improve their visualization and assist in tumor classification.”
    Pancreatic Cancer Imaging: A New Look at an Old Problem
    Linda C. Chu MD, Seyoun Park, Satomi Kawamoto, Alan L. Yuille , Ralph H. Hruban, Elliot K. Fishman
    Current Problems in Diagnostic Radiology (in press)
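  • The contrast drawn above between simple ray casting and Monte Carlo path tracing can be illustrated with a toy ray caster: each pixel accumulates opacity-weighted intensity along a single straight ray through the volume, whereas a path tracer would average thousands of stochastically scattered rays per pixel. This sketch assumes a toy transfer function and random data; it is not a vendor implementation.

    import numpy as np

    def ray_cast(volume, opacity_scale=0.05):
        """Front-to-back alpha compositing along the z axis for every (x, y) ray."""
        image = np.zeros(volume.shape[:2])
        transmittance = np.ones(volume.shape[:2])
        for z in range(volume.shape[2]):
            sample = volume[:, :, z]
            alpha = np.clip(sample * opacity_scale, 0.0, 1.0)   # toy transfer function
            image += transmittance * alpha * sample             # emitted intensity
            transmittance *= (1.0 - alpha)                      # light absorbed so far
        return image

    volume = np.random.default_rng(0).random((64, 64, 64))      # stand-in CT volume
    print(ray_cast(volume).shape)                                # (64, 64) rendered image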
  • “Augmented reality (AR) is another advanced visualization technique that may improve treatment planning as well as intraoperative navigation. AR can superimpose holographic representations of imaging data onto the real-world environment through the use of handheld displays or head-mounted see-through glasses. Preliminary studies on AR applications in pancreatic surgery have shown that these holographic images may be helpful in proper selection of resection margin and in defining the spatial relationship between the tumor and adjacent organs and vasculature. AR surgical navigation may be particularly valuable during laparoscopic or robotic-assisted surgery due to limited visualization and tactile feedback during surgery.”
    Pancreatic Cancer Imaging: A New Look at an Old Problem
    Linda C. Chu MD, Seyoun Park, Satomi Kawamoto, Alan L. Yuille , Ralph H. Hruban, Elliot K. Fishman
    Current Problems in Diagnostic Radiology (in press)
  • “Although radiomics has the potential to provide personalized imaging biomarkers for risk stratification and prognostication, there are currently no standards for image acquisition or feature extraction. Published studies differ regarding image acquisition, segmentation, and the types and numbers of radiomics features that were extracted. Each of these factors can affect the reproducibility of radiomics signatures. Although some of this variability may be mitigated through image compensation methods,86 further work is needed to define the optimal image acquisition and feature extraction protocols. While these preliminary studies appear promising, many of them lack internal and external validation to ensure the generalizability of the results. Several studies also lack head-to-head comparisons between radiomics and expert radiologists to demonstrate the incremental clinical benefit of radiomics as opposed to current standard of care. The potential of advanced visualization techniques in guiding patient management has been explored in small single-center case-series, and these results also require further validation.”
    Pancreatic Cancer Imaging: A New Look at an Old Problem
    Linda C. Chu MD, Seyoun Park, Satomi Kawamoto, Alan L. Yuille , Ralph H. Hruban, Elliot K. Fishman
    Current Problems in Diagnostic Radiology (in press)
  • “Pancreatic ductal adenocarcinoma (PDAC) segmentation is one of the most challenging tumor segmentation tasks, yet critically important for clinical needs. Previous work on PDAC segmentation is limited to the moderate amounts of annotated patient images (n<300) from venous or venous+arterial phase CT scans. Based on a new self-learning framework, we propose to train the PDAC segmentation model using a much larger quantity of patients (n≈1,000), with a mix of annotated and unannotated venous or multi-phase CT images. Pseudo annotations are generated by combining two teacher models with different PDAC segmentation specialties on unannotated images, and can be further refined by a teaching assistant model that identifies associated vessels around the pancreas.”
    Robust Pancreatic Ductal Adenocarcinoma Segmentation with Multi-Institutional Multi-Phase Partially-Annotated CT Scans
    Ling Zhang et al.
    arXiv: August 2020 (in press)
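  • The pseudo-annotation step described above (two teacher models with different segmentation specialties combined on unannotated images) can be sketched as follows. The fusion rule, the phase assignments, and the random probability maps are assumptions for illustration, not the authors' actual framework.

    import numpy as np

    def combine_teachers(prob_teacher_a, prob_teacher_b, threshold=0.5):
        """Average the two teachers' tumor probability maps and binarize."""
        fused = 0.5 * (prob_teacher_a + prob_teacher_b)
        return (fused >= threshold).astype(np.uint8)

    rng = np.random.default_rng(0)
    prob_a = rng.random((64, 64, 32))   # e.g., a teacher trained on venous-phase CT
    prob_b = rng.random((64, 64, 32))   # e.g., a teacher trained on multi-phase CT
    pseudo_label = combine_teachers(prob_a, prob_b)
    print("pseudo-annotated tumor voxels:", int(pseudo_label.sum()))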
  • “Fully automated and accurate segmentation of pancreatic ductal adenocarcinoma (PDAC) is one of the most challenging tumor segmentation tasks, in the aspects of complex abdominal structures, large variations in morphology and appearance, low image contrast and fuzzy/uncertain boundary, etc. Previous studies introduce the cascade UNet for segmenting venous phase CT and hyperpairing network for segmenting venous+arterial phases CT and achieving mean Dice scores of 0.52 and 0.64, respectively. By incorporating nnUNet into a new self-learning framework with two teachers and one teaching assistant to segment three-phases of CT scans, our method reaches a Dice coefficient of 0.71, similar to the inter-observer variability between radiologists. This provides promise that a radiologist-level performance for accurate PDAC tumor segmentation in multi-phase CT imaging can be achieved through our computerized method.”
    Robust Pancreatic Ductal Adenocarcinoma Segmentation with Multi-Institutional Multi-Phase Partially-Annotated CT Scans
    Ling Zhang et al.
    arXiv: August 2020 (in press)
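  • The Dice coefficient quoted above (0.71 for the proposed method) measures overlap between a predicted and a reference mask as 2|A∩B| / (|A| + |B|); a small worked example with toy masks:

    import numpy as np

    def dice(pred, ref):
        pred, ref = pred.astype(bool), ref.astype(bool)
        overlap = np.logical_and(pred, ref).sum()
        return 2.0 * overlap / (pred.sum() + ref.sum())

    ref = np.zeros((32, 32), dtype=np.uint8)
    ref[8:24, 8:24] = 1                      # reference segmentation
    pred = np.zeros_like(ref)
    pred[10:26, 8:24] = 1                    # slightly shifted prediction
    print("Dice coefficient:", round(dice(pred, ref), 2))   # 0.88 for this toy pair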

  • Background: To identify preoperative computed tomography radiomics texture features which correlate with resection margin status and prognosis in resected pancreatic head adenocarcinoma.
    Methods: Improved prognostication methods utilizing novel non-invasive radiomic techniques may accurately predict resection margin status preoperatively. In an ongoing study concerning pancreatic head adenocarcinoma, the venous enhanced CT images of 86 patients who underwent pancreaticoduodenectomy were selected, and the resection margin (>1 mm or ≤1 mm) was identified by pathological examination. Three regions of interest (ROIs) were then taken from superior to inferior facing the superior mesenteric vein and artery. Texture features within the ROIs were then extracted using Laplacian-Dirichlet based texture analysis methods and assessed in relation to patient prognosis.
    Results: Patients with >1 mm resection margin had an overall improved survival compared to ≤1 mm (P < 0.05). Distance 1 and 2 of the gray-level co-occurrence matrix, high gray-level run emphasis of the run-length matrix, and the average filter of the wavelet transform (all P < 0.05) were correlated with resection margin status (area under the curve was 0.784, sensitivity was 75% and specificity was 79%). The energy of the wavelet transform, the measure of smoothness of the histogram, and the variance in 2 directions of the Gabor transform are independent predictors of overall survival, independent of resection margin.
    Conclusions: Resection margin status (>1 mm vs ≤1 mm) is a key prognostic factor in pancreatic adenocarcinoma; CT radiomic analysis has the potential to predict resection margin status preoperatively, and the radiomic labels may improve selection of neoadjuvant therapy.
    Radiomics Signatures of Computed Tomography Imaging for Predicting Resection Margin Status in Pancreatic Head Adenocarcinoma
    Jinheng Liu et al.
    BMC Surgery (in press)
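  • The gray-level co-occurrence matrix (GLCM) features at distances 1 and 2 mentioned in the Results can be computed with scikit-image as sketched below; the quantized ROI is synthetic, and the specific texture properties shown (contrast, homogeneity) may differ from the features used in the study.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(0)
    roi = rng.integers(0, 64, size=(32, 32)).astype(np.uint8)   # quantized CT ROI

    glcm = graycomatrix(roi, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=64, symmetric=True, normed=True)
    print("contrast:", graycoprops(glcm, "contrast").round(2))        # one value per distance/angle
    print("homogeneity:", graycoprops(glcm, "homogeneity").round(2))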
    Results: Patients with >1 mm resection margin had an overall improved survival compared to ≤1 mm (P < 0.05). Distance 1 and 2 of the gray-level co-occurrence matrix, high gray-level run emphasis of the run-length matrix, and the average filter of the wavelet transform (all P < 0.05) were correlated with resection margin status (area under the curve was 0.784, sensitivity was 75% and specificity was 79%). The energy of the wavelet transform, the measure of smoothness of the histogram, and the variance in 2 directions of the Gabor transform are independent predictors of overall survival, independent of resection margin.
    Conclusions: Resection margin status (>1 mm vs ≤1 mm) is a key prognostic factor in pancreatic adenocarcinoma; CT radiomic analysis has the potential to predict resection margin status preoperatively, and the radiomic labels may improve selection of neoadjuvant therapy.
    Radiomics Signatures of Computed Tomography Imaging for Predicting Resection Margin Status in Pancreatic Head Adenocarcinoma
    Jinheng Liu et al.
    BMC Surgery (in press)
  • Background: To identify preoperative computed tomography radiomics texture features which correlate with resection margin status and prognosis in resected pancreatic head adenocarcinoma.
    Methods: Improved prognostication methods utilizing novel non-invasive radiomic techniques may accurately predict resection margin status preoperatively. In an ongoing study concerning pancreatic head adenocarcinoma, the venous enhanced CT images of 86 patients who underwent pancreaticoduodenectomy were selected, and the resection margin (>1 mm or ≤1 mm) was identified by pathological examination. Three regions of interest (ROIs) were then taken from superior to inferior facing the superior mesenteric vein and artery. Texture features within the ROIs were then extracted using Laplacian-Dirichlet based texture analysis methods and assessed in relation to patient prognosis.
    Radiomics Signatures of Computed Tomography Imaging for Predicting Resection Margin Status in Pancreatic Head Adenocarcinoma
    Jinheng Liu et al.
    BMC Surgery (in press)
  • “Radiomic texture analysis of pre-operative enhanced CT images can be used for accurate preoperative assessment of resection margins in patients with pancreatic head adenocarcinoma, providing clinicians and patients a non-invasive means of perioperative prognostication to guide management.”
    Radiomics Signatures of Computed Tomography Imaging for Predicting Resection Margin Status in Pancreatic Head Adenocarcinoma
    Jinheng Liu et al.
    BMC Surgery (in press)

  • “PDAC is the most common pancreatic malignancy, accounting for more than 85% of pancreatic tumors. It is typically a disease of elderly patients, with a mean age at presentation of 68 years and a male-to-female ratio of 1.6:1. After colorectal cancer, it is the second most common cancer of the digestive system in the United States, and its incidence is rising sharply. The development of pancreatic cancer is strongly related to smoking, family history, obesity, long-standing diabetes, and chronic pancreatitis. Early stages of PDAC are clinically silent. Abdominal pain is the most frequently reported clinical symptom, even when the tumor is small (<2 cm).”
    Pancreatic Ductal Adenocarcinoma and Its Variants: Pearls and Perils
    Schawkat K et al.
    RadioGraphics 2020; 40:0000–0000
  • "With the development of AI and all its potential wonders in terms of increasing the accuracy of our diagnostic capabilities and potentially improving patient care, we must also be concerned about the potential dark side by bad actors. The sooner organized radiology and organized medicine address these issues with clarity, the more stable and protected the health care system and our patients will be from those intent on creating harm and havoc by abusing AI. The acceleration of data sharing during the current pandemic exposes critical vulnerabilities in data security. It reminds us of the pervasive threat that bad actors can and will exploit any technology for their selfish gains. Doing nothing is not a viable strategy, but acting in a concerted effort will lead us to the protection we need and is important as we push AI development over the next several years.”
    The Potential Dangers of Artificial Intelligence for Radiology and Radiologists
    Linda C. Chu, MD, Anima Anandkumar, PhD, Hoo Chang Shin, PhD, Elliot K. Fishman, MD
    JACR (in press)
  • “Pancreatic cancer continues to be one of the deadliest malignancies and is the third leading cause of cancer-related mortality in the United States. Based on several models, it is projected to become the second leading cause of cancer-related deaths by 2030. Although the overall survival rate for patients diagnosed with pancreatic cancer is less than 10%, survival rates are increasing in those whose cancers are detected at an early stage, when intervention is possible. There are, however, no reliable biomarkers or imaging technology that can detect early-stage pancreatic cancer or accurately identify precursors that are likely to progress to malignancy.”
    Prediagnostic Image Data, Artificial Intelligence, and Pancreatic Cancer A Tell-Tale Sign to Early Detection
    Matthew R. Young et al.
    Pancreas 2020;49: 882–886
  • "The challenge now is to develop imaging biomarkers and models that can further improve sensitivity for the detection of early-stage PDACs and aggressive neoplasms while mitigating diagnostic uncertainty in evaluation of premalignant abnormalities. Augmented reality, artificial intelligence (AI), and related computa- tional techniques can uncover these subtle patterns, improve image interpretation, and streamline diagnostic workflows.”
    Prediagnostic Image Data, Artificial Intelligence, and Pancreatic Cancer A Tell-Tale Sign to Early Detection
    Matthew R. Young et al.
    Pancreas 2020;49: 882–886
  • "Currently, identification of localized pancreatic cancer is mostly incidental as localized pancreatic cancer is asymptomatic. What is urgently needed are minimally invasive screening strategies with a high clinical sensitivity and specificity to identity early-stage cancer and improve these grim statistics. To this end, it is particularly important to develop tests that have high specificity because a false-positive test may trigger unnecessary invasive procedures, which add their own risk of morbidity and mortality.”
    Prediagnostic Image Data, Artificial Intelligence, and Pancreatic Cancer A Tell-Tale Sign to Early Detection
    Matthew R. Young et al.
    Pancreas 2020;49: 882–886
  • There are many challenges that need to be mitigated in the development of an image repository to enable AI system development. These include the following:
    (1) What are the requirements for defining image annotation? 
    (2) What are the main concerns with depositing patient imaging data?
    (3) What are the definitions of AI-specific clinical use cases?
    (4) What are the benefits and drawbacks of alternative data sharing in facilitating AI development? 
    Prediagnostic Image Data, Artificial Intelligence, and Pancreatic Cancer A Tell-Tale Sign to Early Detection
    Matthew R. Young et al.
    Pancreas 2020;49: 882–886
  • “The AI-driven diagnostic software has the potential to transform early detection of pancreatic cancer by improving accuracy and consistency of interpretation of radiologic imaging scans and related patient data. The development of reproducible AI systems requires access to current, large, diverse, and multisite data sets, which are subject to numerous data sharing limitations. Future efforts are likely to involve alternative data sharing solutions to enable the development of both public and private AI-ready data resources. Early detection of pancreatic cancer represents an attractive AI use case, well matched to benefit from the MTD challenge approach. This approach will significantly expand the use of sensitive data to improve early detection of pancreatic cancer and lay the foundation for the development of federated architectures for real-world medical data in general.”
    Prediagnostic Image Data, Artificial Intelligence, and Pancreatic Cancer A Tell-Tale Sign to Early Detection
    Matthew R. Young et al.
    Pancreas 2020;49: 882–886
  • Purpose: The purpose of this study was to determine whether computed tomography (CT)-based machine learning of radiomics features could help distinguish autoimmune pancreatitis (AIP) from pancreatic ductal adenocarcinoma (PDAC).
    Results: The pancreas was diffusely involved in 37 (37/89; 41.6%) patients with AIP and not diffusely in 52 (52/89; 58.4%) patients. Using machine learning, 95.2% (59/62; 95% confidence interval [CI]: 89.8–100%), 83.9% (52/62; 95% CI: 74.7–93.0%) and 77.4% (48/62; 95% CI: 67.0–87.8%) of the 62 test patients were correctly classified as either having PDAC or AIP with thin-slice venous phase, thin-slice arterial phase, and thick-slice venous phase CT, respectively. Three of the 29 patients with AIP (3/29; 10.3%) were incorrectly classified as having PDAC but all 33 patients with PDAC (33/33; 100%) were correctly classified with thin-slice venous phase, with 89.7% sensitivity (26/29; 95% CI: 78.6–100%) and 100% specificity (33/33; 95% CI: 93–100%) for the diagnosis of AIP, 95.2% accuracy (59/62; 95% CI: 89.8–100%) and area under the curve of 0.975 (95% CI: 0.936–1.0).
    Conclusions: Radiomic features help differentiate AIP from PDAC with an overall accuracy of 95.2%.
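  • The diagnostic metrics quoted above for the thin-slice venous phase model follow directly from the reported counts (26/29 AIP and 33/33 PDAC cases correctly classified); a short arithmetic check:

    tp, fn = 26, 3    # AIP cases correctly / incorrectly classified (AIP taken as "positive")
    tn, fp = 33, 0    # PDAC cases correctly / incorrectly classified

    sensitivity = tp / (tp + fn)                  # 26/29 ≈ 0.897
    specificity = tn / (tn + fp)                  # 33/33 = 1.000
    accuracy = (tp + tn) / (tp + fn + tn + fp)    # 59/62 ≈ 0.952
    print(round(sensitivity, 3), round(specificity, 3), round(accuracy, 3))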
  • Purpose: The purpose of this study was to determine whether computed tomography (CT)-based machine learning of radiomics features could help distinguish autoimmune pancreatitis (AIP) from pancreatic ductal adenocarcinoma (PDAC).
    Conclusions: Radiomic features help differentiate AIP from PDAC with an overall accuracy of 95.2%.
    Differentiating autoimmune pancreatitis from pancreatic ductal adenocarcinoma with CT radiomics features
    S. Park, L.C. Chu, R.H. Hruban, Vogelstein, K.W. Kinzler, A.L. Yuille, Fouladi, S. Shayesteh, S. Ghandili, C.L. Wolfgang, R. Burkhart, J. He, E.K. Fishman, S. Kawamoto
    Diagnostic and Interventional Imaging (in press)
  • CT radiomics differentiates AIP from PDAC with 89.7% sensitivity and 100% specificity.
    • Thin slice CT radiomics better differentiates AIP from PDAC than thick slice CT radiomics.
    • Venous phase CT radiomics better differentiates AIP from PDAC than arterial phase radiomics.
    Differentiating autoimmune pancreatitis from pancreatic ductal adenocarcinoma with CT radiomics features
    S. Park, L.C. Chu, R.H. Hruban, Vogelstein, K.W. Kinzler, A.L. Yuille, Fouladi, S. Shayesteh, S. Ghandili, C.L. Wolfgang, R. Burkhart, J. He, E.K. Fishman, S. Kawamoto
    Diagnostic and Interventional Imaging (in press)
  • “AIP has clinical and imaging features that overlap with those of pancreatic ductal adenocarcinoma (PDAC) and can pose a significant diagnostic dilemma even for experienced radiologists. The management of these two conditions is markedly different. Patients with AIP are initially treated with oral corticosteroids, while patients with PDAC are treated with a combination of surgical resection and chemotherapy. The most common presentation of AIP is obstructive jaundice and pancreatic enlargement, which mimics that of PDAC, and 2–6% of patients undergoing surgical resection for suspected pancreatic cancer are actually diagnosed with AIP upon histopathological analysis. Computed tomography (CT) plays an important role in the evaluation of suspected pancreatic cancer, and is often the initial diagnostic imaging modality. It is of utmost importance to correctly differentiate AIP from PDAC early in the disease process so as to administer the proper treatment and avoid unnecessary pancreatic resections in patients with AIP.”
    Differentiating autoimmune pancreatitis from pancreatic ductal adenocarcinoma with CT radiomics features
    S. Park, L.C. Chu, R.H. Hruban, Vogelstein, K.W. Kinzler, A.L. Yuille, Fouladi, S. Shayesteh, S. Ghandili, C.L. Wolfgang, R. Burkhart, J. He, E.K. Fishman, S. Kawamoto
    Diagnostic and Interventional Imaging (in press)

  • "In conclusion, radiomics analysis of CT images is reasonably accurate in differentiating AIP from PDAC. Using such features, in combination with clinical and standard radiologic analyses, may improve the accuracy of AID diagnosis and spare patients’ unnecessary surgical procedure.”
    Differentiating autoimmune pancreatitis from pancreatic ductal adenocarcinoma with CT radiomics features
    S. Park, L.C. Chu, R.H. Hruban, Vogelstein, K.W. Kinzler, A.L. Yuille, Fouladi, S. Shayesteh, S. Ghandili, C.L. Wolfgang, R. Burkhart, J. He, E.K. Fishman, S. Kawamoto
    Diagnostic and Interventional Imaging (in press)
  • "Our results showed that by combining radiomics features, AIP could be distinguished from PDAC with a sensitivity of 89.7% and a specificity of 100%, and an overall accuracy of 95.2%. Among 3 patients with focal AIP were falsely classified as PDAC using radiomics features, two patients had focal AIP in the head with a plastic stent in the common bile duct, which can sensitively affect to the quantitative feature computation. In our study, the accuracy was higher than that in a previous study that evaluated CT to differentiate AIP from PDAC based on morphological features. In that study, the mean accuracies for diagnosing AIP and PDAC were 68% and 83%, respectively. In our study, AIP was considered as a diagnosis or differential diagnosis by the radiologists in only in 67% of patients with AIP not already suspected to be AIP at the time of CT examination.”
    Differentiating autoimmune pancreatitis from pancreatic ductal adenocarcinoma with CT radiomics features
    S. Park, L.C. Chu, R.H. Hruban, Vogelstein, K.W. Kinzler, A.L. Yuille, Fouladi, S. Shayesteh, S. Ghandili, C.L. Wolfgang, R. Burkhart, J. He, E.K. Fishman, S. Kawamoto
    Diagnostic and Interventional Imaging (in press)
  • “We found that radiomics features were better at distinguishing AIP from PDAC using venous phase CT images than using arterial phase images. We also performed radiomics analysis on both thin- and thick-slice reconstructions. We found that thin-slice CT based radiomics signature had better diagnostic performance than thick-slice, as reported in pulmonary nodules and lung cancer in prior studies.”
    Differentiating autoimmune pancreatitis from pancreatic ductal adenocarcinoma with CT radiomics features
    S. Park, L.C. Chu, R.H. Hruban, Vogelstein, K.W. Kinzler, A.L. Yuille, Fouladi, S. Shayesteh, S. Ghandili, C.L. Wolfgang, R. Burkhart, J. He, E.K. Fishman, S. Kawamoto
    Diagnostic and Interventional Imaging (in press)
  • Purpose: The purpose of this study is to evaluate diagnostic performance of a commercially available radiomics research prototype vs. an in-house radiomics software in the binary classification of CT images from patients with pancreatic ductal adenocarcinoma (PDAC) vs. healthy controls.
    Conclusion: Commercially available and in-house radiomics software achieve similar diagnostic performance, which may lower the barrier of entry for radiomics research and allow more clinician-scientists to perform radiomics research.
    Diagnostic performance of commercially available vs. in‐house radiomics software in classification of CT images from patients with pancreatic ductal adenocarcinoma vs. healthy controls
    Linda C. Chu · Berkan Solmaz · Seyoun Park · Satomi Kawamoto · Alan L. Yuille · Ralph H. Hruban · Elliot K. Fishman
    Abdominal Radiology https://doi.org/10.1007/s00261-020-02556-w
  • “Results: When 40 radiomics features were used in the random forest classification, in-house software achieved superior sensitivity (1.00) and accuracy (0.992) compared to the commercially available research prototype (sensitivity = 0.950, accuracy = 0.968). When the number of features was reduced to five features, diagnostic performance of the in-house software decreased to sensitivity (0.950), specificity (0.923), and accuracy (0.936). Diagnostic performance of the commercially available research prototype was unchanged.”
    Diagnostic performance of commercially available vs. in‐house radiomics software in classification of CT images from patients with pancreatic ductal adenocarcinoma vs. healthy controls
    Linda C. Chu · Berkan Solmaz · Seyoun Park · Satomi Kawamoto · Alan L. Yuille · Ralph H. Hruban · Elliot K. Fishman
    Abdominal Radiology https://doi.org/10.1007/s00261-020-02556-w
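  • The comparison above (random forest classification with 40 features versus a reduced set of 5) can be mocked up as follows; the synthetic features, the selection method, and the metrics are illustrative assumptions rather than either software package's actual pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.metrics import accuracy_score, recall_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(250, 40))                                   # 40 radiomics-style features
    y = (X[:, :5].sum(axis=1) + rng.normal(size=250) > 0).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for k in (40, 5):
        cols = SelectKBest(f_classif, k=k).fit(X_tr, y_tr).get_support()
        clf = RandomForestClassifier(random_state=0).fit(X_tr[:, cols], y_tr)
        pred = clf.predict(X_te[:, cols])
        print(k, "features: sensitivity =", round(recall_score(y_te, pred), 3),
              "accuracy =", round(accuracy_score(y_te, pred), 3))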
  • “Radiomics has the potential to generate imaging biomarkers for classification and prognostication. Technical parameters from image acquisition to feature extraction and analysis have the potential to affect radiomics features. The current study used the same CT images with manual segmentation on both a commercially available research prototype and in-house radiomics software to control for any variability at the image acquisition step and compared the diagnostic performance of the two programs. Both programs achieved similar diagnostic performance in the binary classification of CT images from patients with PDAC and healthy control subjects, despite differences in the radiomics features they employed (854 features in commercial program vs. 478 features in in-house program).”
    Diagnostic performance of commercially available vs. in‐house radiomics software in classification of CT images from patients with pancreatic ductal adenocarcinoma vs. healthy controls
    Linda C. Chu · Berkan Solmaz · Seyoun Park · Satomi Kawamoto · Alan L. Yuille · Ralph H. Hruban · Elliot K. Fishman
    Abdominal Radiology https://doi.org/10.1007/s00261-020-02556-w
  • "This is reassuring that even though there may be variations in the computed values for radiomics features, the differences do not seem to significantly impact the overall diagnostic performance of the constellation of radiomics features. This is important for the broader implementation of radiomics research. Currently, many radiomics studies have been performed using proprietary in-house software, which requires in-house expertise in computer science, a luxury that only a few academic centers can afford. The results of this study show that commercially available radiomics software may be a viable alternative to in-house computer science expertise, which can lower the barrier of entry for radiomics research and allow clinicians to validate findings of the published studies with their own local datasets.”
    Diagnostic performance of commercially available vs. in‐house radiomics software in classification of CT images from patients with pancreatic ductal adenocarcinoma vs. healthy controls
    Linda C. Chu · Berkan Solmaz · Seyoun Park · Satomi Kawamoto · Alan L. Yuille · Ralph H. Hruban · Elliot K. Fishman
    Abdominal Radiology https://doi.org/10.1007/s00261-020-02556-w
  • “This study showed that a commercially available radiomics software may be able to achieve similar diagnostic performance as an in-house radiomics software. The results obtained from one radiomics software may be transferrable to another system. Availability of commercial radiomics software may lower the barrier of entry for radiomics research and allow more researchers to engage in this exciting area of research.”
    Diagnostic performance of commercially available vs. in‐house radiomics software in classification of CT images from patients with pancreatic ductal adenocarcinoma vs. healthy controls
    Linda C. Chu · Berkan Solmaz · Seyoun Park · Satomi Kawamoto · Alan L. Yuille · Ralph H. Hruban · Elliot K. Fishman
    Abdominal Radiology https://doi.org/10.1007/s00261-020-02556-w
  • “Accurate and robust segmentation of abdominal organs on CT is essential for many clinical applications such as computer-aided diagnosis and computer-aided surgery. But this task is challenging due to the weak boundaries of organs, the complexity of the background, and the variable sizes of different organs. To address these challenges, we introduce a novel framework for multi-organ segmentation of abdominal regions by using organ-attention networks with reverse connections (OAN-RCs) which are applied to 2D views of the 3D CT volume, and output estimates which are combined by statistical fusion exploiting structural similarity. More specifically, OAN is a two-stage deep convolutional network, where deep network features from the first stage are combined with the original image, in a second stage, to reduce the complex background and enhance the discriminative information for the target organs.”
    Abdominal multi-organ segmentation with organ-attention networks and statistical fusion
    Wang Y, Zhou Y, Shen W, Park S, Fishman EK, Yuille AL.
    Med Image Anal. 2019 Jul;55:88-102. doi: 10.1016/j.media.2019.04.005. Epub 2019 Apr 18.
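  • A minimal sketch of the two-stage idea described above, in which attention maps produced by a first network are combined with the original image before a second network refines the prediction. The layer sizes and the use of plain convolutions are assumptions for illustration; this is not the authors' OAN-RC architecture.

    import torch
    import torch.nn as nn

    class TwoStageOAN(nn.Module):
        def __init__(self, n_organs=3):
            super().__init__()
            self.stage1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                        nn.Conv2d(16, n_organs, 1))
            # Stage 2 sees the original image concatenated with the stage-1 attention maps.
            self.stage2 = nn.Sequential(nn.Conv2d(1 + n_organs, 16, 3, padding=1), nn.ReLU(),
                                        nn.Conv2d(16, n_organs, 1))

        def forward(self, image):
            attention = torch.softmax(self.stage1(image), dim=1)   # organ-attention maps
            refined = self.stage2(torch.cat([image, attention], dim=1))
            return attention, refined

    ct_slice = torch.randn(1, 1, 64, 64)         # one 2D view of a CT volume
    attention, refined = TwoStageOAN()(ct_slice)
    print(attention.shape, refined.shape)         # torch.Size([1, 3, 64, 64]) each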
  • "First, many abdominal organs have weak boundaries between spatially adjacent structures on CT, e.g. between the head of the pancreas and the duodenum. In addition, the entire CT volume includes a large variety of different complex structures. Morpho- logical and topological complexity includes anatomically connected structures such as the gastrointestinal (GI) track (stomach, duodenum, small bowel and colon) and vascular structures. The correct anatomical borders between connected structures may not be always visible in CT, especially in sectional images (i.e., 2D slices), and may be indicated only by subtle texture and shape change, which causes uncertainty even for human experts. This makes it hard for deep networks to distinguish the target organs from the complex background.”
    Abdominal multi-organ segmentation with organ-attention networks and statistical fusion
    Wang Y, Zhou Y, Shen W, Park S, Fishman EK, Yuille AL.
    Med Image Anal. 2019 Jul;55:88-102. doi: 10.1016/j.media.2019.04.005. Epub 2019 Apr 18.
  • “In general, 3D deep networks face far greater complex challenges than 2D deep networks. Both approaches rely heavily on graphics processing units (GPUs) but these GPUs have limited memory size which makes it difficult when dealing with full 3D CT volumes compared to 2D CT slices (which require much less memory). In addition, 3D deep networks typically require many more parameters than 2D deep networks and hence require much more training data, unless they are restricted to patches.”
    Abdominal multi-organ segmentation with organ-attention networks and statistical fusion
    Wang Y, Zhou Y, Shen W, Park S, Fishman EK, Yuille AL.
    Med Image Anal. 2019 Jul;55:88-102. doi: 10.1016/j.media.2019.04.005. Epub 2019 Apr 18.

  • “In this paper, we proposed a novel framework for multi-organ segmentation using OAN-RCs with statistical fusion exploiting structural similarity. Our two-stage organ-attention network reduces uncertainties at weak boundaries, focuses attention on organ regions with simple context, and adjusts FCN error by training the combination of original images and OAMs. Reverse connections deliver abstract level semantic information to lower layers so that hidden layers can be assisted to contain more semantic information and give good results even for small organs.”
    Abdominal multi-organ segmentation with organ-attention networks and statistical fusion
    Wang Y, Zhou Y, Shen W, Park S, Fishman EK, Yuille AL.
    Med Image Anal. 2019 Jul;55:88-102. doi: 10.1016/j.media.2019.04.005. Epub 2019 Apr 18.

  • “In addition to traditional methods, cinematic rendering (CR) as a novel 3D rendering technique can be used to generate photorealistic images with more accurate information regarding the anatomical details. CR can assist clinicians to visualize precisely the extent of tumor vascular invasion, which might be critical for surgical planning; however, the feasibility of this method and other novel techniques in routine clinical practice is yet to be studied.”
    Pitfalls in the MDCT of pancreatic cancer: strategies for minimizing errors
    Arya Haj‐Mirzaian · Satomi Kawamoto · Atif Zaheer · Ralph H. Hruban · Elliot K. Fishman · Linda C. Chu
    Abdominal Radiology 2020 (in press)
  • Purpose: The purpose of this study was to report procedures developed to annotate abdominal computed tomography (CT) images from subjects without pancreatic disease that will be used as the input for deep convolutional neural networks (DNN) for development of deep learning algorithms for automatic recognition of a normal pancreas.
    Results: A total of 1150 dual-phase CT datasets from 575 subjects were annotated. There were 229 men and 346 women (mean age: 45 ± 12 years; range: 18–79 years). The mean intra-observer intra-subject dual-phase CT volume difference of all annotated structures was 4.27 mL (7.65%). The deep network prediction for multi-organ segmentation showed high fidelity with 89.4% and 1.29 mm in terms of mean Dice similarity coefficients and mean surface distances, respectively.
    Annotated normal CT data of the abdomen for deep learning: Challenges and strategies for implementation
    S. Park, L.C. Chu, E.K. Fishman, A.L. Yuille, B. Vogelstein, K.W. Kinzler et al
    Diagn Interv Imaging. 2020 Jan;101(1):35-44.
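The two quality metrics reported above can be computed for a single structure roughly as below (a NumPy/SciPy sketch; the surface distance is one-directional and voxel-spacing handling is simplified).

```python
import numpy as np
from scipy import ndimage

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def mean_surface_distance(pred, truth, spacing=(1.0, 1.0, 1.0)) -> float:
    """Mean distance (mm) from each predicted surface voxel to the reference surface."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    pred_surf = pred ^ ndimage.binary_erosion(pred)      # surface = mask minus its erosion
    truth_surf = truth ^ ndimage.binary_erosion(truth)
    # Distance from every voxel to the nearest reference-surface voxel, in mm.
    dist_to_truth = ndimage.distance_transform_edt(~truth_surf, sampling=spacing)
    return dist_to_truth[pred_surf].mean()
```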
  • “Conclusions: A reliable data collection/annotation process for abdominal structures was developed. This process can be used to generate large datasets appropriate for deep learning.”
    Annotated normal CT data of the abdomen for deep learning: Challenges and strategies for implementation
    S. Park, L.C. Chu, E.K. Fishman, A.L. Yuille, B. Vogelstein, K.W. Kinzler et al
    Diagn Interv Imaging. 2020 Jan;101(1):35-44.

  • “In conclusion, we developed a reliable and unique data collection and annotation process for abdominal structures using volumetric CT. The collected data can be used to train the deep learning network for automated recognition of normal abdominal organs. The success of this effort was dependent on a multidisciplinary team including radiologists, computer scientists, oncologists, and pathologists that have worked closely together. Pathologists confirmed that the pancreas in all subjects was normal without pancreatic neoplasms or other pathology. Oncologists provided expert guidance in experimental design and data analysis.”
    Annotated normal CT data of the abdomen for deep learning: Challenges and strategies for implementation
    S. Park, L.C. Chu, E.K. Fishman, A.L. Yuille, B. Vogelstein, K.W. Kinzler et al
    Diagn Interv Imaging. 2020 Jan;101(1):35-44.

  • Assessing Radiology Research on Artificial Intelligence:
    A Brief Guide for Authors, Reviewers, and Readers—From the Radiology Editorial Board
    David A. Bluemke et al.
    Radiology 2019; (in press) https://doi.org/10.1148/radiol.2019192515

  • Application of Deep Learning to Pancreatic Cancer Detection: Lessons Learned From Our Initial Experience.
    Chu LC, Park S, Kawamoto S, Wang Y, Zhou Y, Shen W, Zhu Z, Xia Y, Xie L, Liu F, Yu Q, Fouladi DF, Shayesteh S, Zinreich E, Graves JS, Horton KM, Yuille AL, Hruban RH, Kinzler KW, Vogelstein B, Fishman EK.
    J Am Coll Radiol. 2019 Sep;16(9 Pt B):1338-1342
  • “There is a common perception that one can simply provide any number of unprocessed cases to the computer, and AI can then easily perform the discovery or classification task. This approach is referred to as unsupervised learning, in which the deep-learning algorithm is presented with unlabeled data and learns to group the data by similarities or differences. Although this approach is plausible, complex image analysis, such as the detection of pancreatic cancer, may require supervised learning to achieve acceptable results.”
    Application of Deep Learning to Pancreatic Cancer Detection: Lessons Learned From Our Initial Experience.
    Chu LC, Park S, Kawamoto S, et al
    J Am Coll Radiol. 2019 Sep;16(9 Pt B):1338-1342
  • In supervised learning, the algorithm is provided with labeled data, referred to as ground truth, which is used as feedback to improve the algorithm during each iteration. The degree of data labeling can range from a per case level of normal versus abnormal to more detailed labeling in which the boundaries of each region of interest are drawn on the image on every image slice; this boundary drawing is referred to as “segmentation.” Because we have chosen to tackle a difficult AI application, we decided that supervised learning with high-quality input data would yield the best chance of success.
    Application of Deep Learning to Pancreatic Cancer Detection: Lessons Learned From Our Initial Experience.
    Chu LC, Park S, Kawamoto S, et al
    J Am Coll Radiol. 2019 Sep;16(9 Pt B):1338-1342
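A minimal sketch of what the two labeling granularities described above look like in code: a case-level label is one scalar per scan, whereas a segmentation ground truth assigns a class to every voxel. The loss below is a generic cross-entropy, not the authors' training recipe, and all tensors are placeholders.

```python
import torch
import torch.nn.functional as F

# Case-level supervision: one label per CT volume (0 = normal, 1 = abnormal).
case_logits = torch.randn(4, 2)               # batch of 4 scans, 2 classes
case_labels = torch.tensor([0, 1, 0, 1])
case_loss = F.cross_entropy(case_logits, case_labels)

# Voxel-level supervision ("segmentation"): one label per voxel.
seg_logits = torch.randn(4, 3, 64, 64, 64)            # 3 classes, 64^3 sub-volume
seg_labels = torch.randint(0, 3, (4, 64, 64, 64))     # drawn boundaries -> label map
seg_loss = F.cross_entropy(seg_logits, seg_labels)
```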

  • “Our initial decision to train the deep network to recognize all major abdominal organs instead of focusing on the pancreas proved to be a wise investment of time and resources. As we reviewed the false positives, the deep network occasionally predicted the duodenum or jejunum as an exophytic tumor. This was especially problematic in thin patients with poor fat planes. As we trained the deep network to recognize and segment the major abdominal organs, we were able to use this algorithm to prune out false-positive predictions that overlapped with other organs.”
    Application of Deep Learning to Pancreatic Cancer Detection: Lessons Learned From Our Initial Experience.
    Chu LC, Park S, Kawamoto S, et al
    J Am Coll Radiol. 2019 Sep;16(9 Pt B):1338-1342
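The pruning step described above can be approximated with simple mask arithmetic: discard any candidate tumor component whose overlap with a non-pancreatic organ mask exceeds a threshold. This is an illustrative sketch with placeholder thresholds, not the published pipeline.

```python
import numpy as np
from scipy import ndimage

def prune_candidates(tumor_prob: np.ndarray, organ_mask: np.ndarray,
                     prob_thresh: float = 0.5, overlap_thresh: float = 0.5):
    """Remove connected tumor candidates that mostly lie inside other organs
    (e.g., duodenum or jejunum mistaken for an exophytic tumor)."""
    candidates = tumor_prob > prob_thresh
    labeled, n = ndimage.label(candidates)          # connected components
    keep = np.zeros_like(candidates)
    for i in range(1, n + 1):
        comp = labeled == i
        overlap = np.logical_and(comp, organ_mask).sum() / comp.sum()
        if overlap < overlap_thresh:                # keep only components outside other organs
            keep |= comp
    return keep
```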
  • “In the future, we envision that the AI system for automatic PDAC detection will be seamlessly integrated into the radiology workflow as a “second reader,” similar to how computer-aided diagnosis operates in mammographic screening. The AI system will directly receive the CT data sets from the PACS, automatically segment the abdominal organs, and annotate any suspicious pancreatic pathology. These annotated cases will be sent back to the PACS for the radiologist to review. The “second reader” can improve diagnostic confidence and has the potential to identify subtle cases that can be missed by a busy radiologist. By increasing the sensitivity and accuracy of PDAC detection, AI-integrated workflow has the potential to significantly improve patient outcomes. As radiologists, we should not sit on the sidelines. Instead, we should actively engage the AI revolution, hoping to enhance our efficiency and reduce our errors, eventually improving patient outcomes.”
    Application of Deep Learning to Pancreatic Cancer Detection: Lessons Learned From Our Initial Experience.
    Chu LC, Park S, Kawamoto S, et al
    J Am Coll Radiol. 2019 Sep;16(9 Pt B):1338-1342
  • “We aim at segmenting a wide variety of organs, including tiny targets (e.g., adrenal gland) and neoplasms (e.g., pancreatic cyst), from abdominal CT scans. This is a challenging task in two aspects. First, some organs (e.g., the pancreas), are highly variable in both anatomy and geometry, and thus very difficult to depict. Second, the neoplasms often vary a lot in its size, shape, as well as its location within the organ. Third, the targets (organs and neoplasms) can be considerably small compared to the human body, and so standard deep networks for segmentation are often less sensitive to these targets and thus predict less accurately especially around their boundaries.”
    Recurrent Saliency Transformation Network for Tiny Target Segmentation in Abdominal CT Scans
    Lingxi Xie, Qihang Yu, Yan Wang, Yuyin Zhou, Elliot K. Fishman, and Alan L. Yuille
    IEEE Trans Med Imaging. 2019 Jul 23. doi: 10.1109/TMI.2019.2930679
  • In this paper, we present an end-to-end framework named Recurrent Saliency Transformation Network (RSTN) for segmenting tiny and/or variable targets. RSTN is a coarse-to-fine approach, which uses prediction from the first (coarse) stage to shrink the input region for the second (fine) stage. A saliency transformation module is inserted between these two stages, so that (i) the coarse-scaled segmentation mask can be transferred as spatial weights and applied to the fine stage; and (ii) the gradients can be back-propagated from the loss layer to the entire network, so that the two stages are optimized in a joint manner. In the testing stage, we perform segmentation iteratively to improve accuracy.
    Recurrent Saliency Transformation Network for Tiny Target Segmentation in Abdominal CT Scans
    Lingxi Xie, Qihang Yu, Yan Wang, Yuyin Zhou, Elliot K. Fishman, and Alan L. Yuille
    IEEE Trans Med Imaging. 2019 Jul 23. doi: 10.1109/TMI.2019.2930679
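A rough PyTorch sketch of the saliency-transformation idea described above: the coarse-stage probability map is converted into spatial weights that modulate the input to the fine stage, and because the transformation is differentiable, both stages can be trained jointly and iterated at test time. Layer shapes and the weighting formula are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class RSTNLikeSketch(nn.Module):
    """Coarse-to-fine segmentation with a differentiable saliency transformation."""
    def __init__(self):
        super().__init__()
        self.coarse = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(16, 1, 1))
        self.fine = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(16, 1, 1))
        # Saliency transformation: learnable conv over the coarse probability map.
        self.transform = nn.Conv2d(1, 1, 3, padding=1)

    def forward(self, x, n_iters: int = 2):
        prob = torch.sigmoid(self.coarse(x))
        for _ in range(n_iters):                          # iterative refinement at test time
            weights = torch.sigmoid(self.transform(prob))
            prob = torch.sigmoid(self.fine(x * weights))  # fine stage sees the re-weighted input
        return prob

out = RSTNLikeSketch()(torch.randn(1, 1, 128, 128))
```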
  • “In this extended journal paper, we allow a gradual optimization to improve the stability of RSTN, and introduce a hierarchical version named H-RSTN to segment tiny and variable neoplasms such as pancreatic cysts. Experiments are performed on several CT datasets, including a public pancreas segmentation dataset, our own multi-organ dataset, and a cystic pancreas dataset. In all these cases, RSTN outperforms the baseline (a stage-wise coarse-to-fine approach) significantly. Confirmed by the radiologists in our team, these promising segmentation results can help early diagnosis of pancreatic cancer.”
    Recurrent Saliency Transformation Network for Tiny Target Segmentation in Abdominal CT Scans
    Lingxi Xie, Qihang Yu, Yan Wang, Yuyin Zhou, Elliot K. Fishman, and Alan L. Yuille
    IEEE Trans Med Imaging. 2019 Jul 23. doi: 10.1109/TMI.2019.2930679
  • “Motivated by the above, we propose a Recurrent Saliency Transformation Network (RSTN) for segmenting very small targets. The chief innovation lies in the mechanism to relate the coarse and fine stages with a saliency transformation module, which repeatedly transforms the segmentation probability map as spatial weights, from the previous iterations to the current iteration. In the training process, the differentiability of this module makes it possible to optimize the coarse-scaled and fine-scaled networks in a joint manner, so that the overall model gets improved after being aware of a global optimization goal.”
    Recurrent Saliency Transformation Network for Tiny Target Segmentation in Abdominal CT Scans
    Lingxi Xie, Qihang Yu, Yan Wang, Yuyin Zhou, Elliot K. Fishman, and Alan L. Yuille
    IEEE Trans Med Imaging. 2019 Jul 23. doi: 10.1109/TMI.2019.2930679

  • “We present the Recurrent Saliency Transformation Network, which enjoys three advantages. (i) Benefited by a (recurrent) global energy function, it is easier to generalize our models from training data to testing data. (ii) With joint optimization over two networks, both of them get improved individually. (iii) By incorporating multi-stage visual cues, more accurate segmentation results are obtained.”
    Recurrent Saliency Transformation Network for Tiny Target Segmentation in Abdominal CT Scans
    Lingxi Xie, Qihang Yu, Yan Wang, Yuyin Zhou, Elliot K. Fishman, and Alan L. Yuille
    IEEE Trans Med Imaging. 2019 Jul 23. doi: 10.1109/TMI.2019.2930679
  • “In conclusion, our study provided preliminary evidence that textural features derived from CT images were useful in differential diagnosis of pancreatic mucinous cystadenomas and serous cystadenomas, which may provide a non-invasive approach to determine whether surgery is needed in clinical practice. However, multicentre studies with larger sample size are needed to confirm these results.”
    Discrimination of Pancreatic Serous Cystadenomas From Mucinous Cystadenomas With CT Textural Features: Based on Machine Learning
    Yang J et al.
    Front. Oncol., 12 June 2019. https://doi.org/10.3389/fonc.2019.00494
  • Results: Only 31 of 102 serous cystic neoplasm cases in this study were recognized correctly by clinicians before the surgery. Twenty-two features were selected from the radiomics system after 100 bootstrapping repetitions of the least absolute shrinkage selection operator regression. The diagnostic scheme performed accurately and robustly, showing the area under the receiver operating characteristic curve = 0.767, sensitivity = 0.686, and specificity = 0.709. In the independent validation cohort, we acquired similar results with receiver operating characteristic curve = 0.837, sensitivity = 0.667, and specificity = 0.818.
    Conclusion: The proposed radiomics-based computer-aided diagnosis scheme could increase preoperative diagnostic accuracy and assist clinicians in making accurate management decisions.
    Computer-Aided Diagnosis of Pancreas Serous Cystic Neoplasms: A Radiomics Method on Preoperative MDCT Images
    Ran Wei et al.
    Technology in Cancer Research & Treatment
    Volume 18: 1-9; 2019
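A schematic scikit-learn version of the feature-selection and evaluation procedure reported above: an L1-penalized (LASSO-like) logistic regression is refit on bootstrap resamples, features selected most often are kept, and discrimination is summarized with ROC AUC. The feature matrix, labels, and all thresholds are placeholders, and the reported study additionally used cross-validation and an independent cohort.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

rng = np.random.RandomState(0)
X = rng.randn(200, 409)                  # placeholder radiomics feature matrix
y = rng.randint(0, 2, 200)               # placeholder SCN / non-SCN labels

# Count how often each feature survives an L1-penalized fit across 100 bootstraps.
selected = np.zeros(X.shape[1])
for _ in range(100):
    Xb, yb = resample(X, y, random_state=rng)
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(Xb, yb)
    selected += (model.coef_.ravel() != 0)

keep = np.argsort(selected)[-22:]        # keep the 22 most frequently selected features

clf = LogisticRegression(max_iter=1000).fit(X[:, keep], y)
print("training-set AUC:", roc_auc_score(y, clf.predict_proba(X[:, keep])[:, 1]))
```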
  • “A total of 17 intensity and texture features were selected, showing difference between SCNs and non-SCNs. Typically, the intensity T-range, wavelet intensity T-median, and wavelet neighborhood gray-tone difference matrix (NGTDM) busyness were the most distinguishable.”
    Computer-Aided Diagnosis of Pancreas Serous Cystic Neoplasms: A Radiomics Method on Preoperative MDCT Images
    Ran Wei et al.
    Technology in Cancer Research & Treatment
    Volume 18: 1-9; 2019
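Features of the kind listed above (intensity range and median, wavelet-filtered copies, NGTDM busyness) are typically extracted with a radiomics toolkit. The sketch below uses the open-source pyradiomics package with placeholder file paths; the exact calls and feature names should be checked against the pyradiomics documentation.

```python
from radiomics import featureextractor  # pip install pyradiomics

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")   # intensity features (e.g., range, median)
extractor.enableFeatureClassByName("ngtdm")        # neighborhood gray-tone difference matrix
extractor.enableImageTypeByName("Wavelet")         # wavelet-decomposed copies of the image

# Placeholder paths to a CT volume and the lesion segmentation mask.
features = extractor.execute("ct_volume.nrrd", "cyst_mask.nrrd")
for name, value in features.items():
    if "ngtdm" in name.lower() or "firstorder" in name.lower():
        print(name, value)
```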
  • “In our retrospective study of 260 patients with PCN, we were surprised to find that the overall preoperative diagnostic accuracy by clinicians was 37.3% (97 of 260), and only 30.4% (31 of 102) of SCN cases were correctly diagnosed. This meant that more than two-thirds of patients with SCN suffered unnecessary pancreatic resection.”
    Computer-Aided Diagnosis of Pancreas Serous Cystic Neoplasms: A Radiomics Method on Preoperative MDCT Images
    Ran Wei et al.
    Technology in Cancer Research & Treatment
    Volume 18: 1-9; 2019
  • “Furthermore, radiomics high-throughput features containing intensity features, texture features, and their wavelet decomposition forms fully utilized image information and obtained more image details that were hard to discover with the naked human eye.”
    Computer-Aided Diagnosis of Pancreas Serous Cystic Neoplasms: A Radiomics Method on Preoperative MDCT Images
    Ran Wei et al.
    Technology in Cancer Research & Treatment
    Volume 18: 1-9; 2019
  • “In conclusion, our study proposed a radiomics-based CAD scheme and stressed the role of radiomics analysis as a novel noninvasive method for improving the preoperative diagnostic accuracy of SCNs. In all, 409 quantitative features were automatically extracted, and a feature subset containing the 22 most statistically significant features was selected after 100 bootstrapping repetitions. Our proposed method improved the diagnostic accuracy and performed well in all metrics, with AUC of 0.767 in the cross-validation cohort and 0.837 in the independent validation cohort. This demonstrated that our CAD scheme could provide a powerful reference for the diagnosis of clinicians to reduce misjudgment and avoid overtreatment.”
    Computer-Aided Diagnosis of Pancreas Serous Cystic Neoplasms: A Radiomics Method on Preoperative MDCT Images
    Ran Wei et al.
    Technology in Cancer Research & Treatment
    Volume 18: 1-9; 2019
  • “In this paper, we adopt 3D CNNs to segment the pancreas in CT images. Although deep neural networks have been proven to be very effective on many 2D vision tasks, it is still challenging to apply them to 3D applications due to the limited amount of annotated 3D data and limited computational resources. We propose a novel 3D-based coarse-to-fine framework for volumetric pancreas segmentation to tackle these challenges. The proposed 3D-based framework outperforms the 2D counterpart to a large margin since it can leverage the rich spatial information along all three axes.”
    A 3D Coarse-to-Fine Framework for Automatic Pancreas Segmentation
    Zhuotun Zhu, Yingda Xia, Wei Shen, Elliot K. Fishman, Alan L. Yuille
    arXiv:1712.00201v1 [cs.CV] 1 Dec 2017
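A minimal sketch of the coarse-to-fine pattern described above: a coarse 3D model localizes the pancreas, a bounding box (with margin) is cropped from the full volume, and a fine model segments only that sub-volume. Both models are stand-in callables returning probability maps, not the published ResDSN architecture.

```python
import numpy as np

def coarse_to_fine(volume, coarse_model, fine_model, margin=16):
    """Run the coarse model on the whole CT, crop around its prediction, then refine."""
    coarse_mask = coarse_model(volume) > 0.5            # rough pancreas localization
    zs, ys, xs = np.where(coarse_mask)
    lo = np.maximum(np.array([zs.min(), ys.min(), xs.min()]) - margin, 0)
    hi = np.minimum(np.array([zs.max(), ys.max(), xs.max()]) + margin + 1, volume.shape)
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

    fine_mask = np.zeros_like(coarse_mask)
    fine_mask[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine_model(crop) > 0.5
    return fine_mask
```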

  • “In this work, we proposed a novel 3D network called “ResDSN” integrated with a coarse-to-fine framework to simultaneously achieve high segmentation accuracy and low time cost. The backbone network “ResDSN” is carefully designed to only have long residual connections for efficient inference. To our best knowledge, we are the first to segment the challenging pancreas using 3D networks which leverage the rich spatial information to achieve the state-of-the-art.”
    A 3D Coarse-to-Fine Framework for Automatic Pancreas Segmentation
    Zhuotun Zhu, Yingda Xia, Wei Shen, Elliot K. Fishman, Alan L. Yuille
    arXiv:1712.00201v1 [cs.CV] 1 Dec 2017
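The "long residual connection" idea can be sketched as a 3D encoder-decoder in which an encoder feature map is added (rather than concatenated) to the corresponding decoder feature map. The depth and channel counts below are illustrative, not the published ResDSN configuration.

```python
import torch
import torch.nn as nn

class LongResidual3D(nn.Module):
    """Tiny 3D encoder-decoder with one long (element-wise) residual connection."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2)
        self.head = nn.Conv3d(16, 1, 1)

    def forward(self, x):
        e = self.enc(x)
        d = self.up(self.down(e))
        return self.head(d + e)   # long residual connection: add, not concatenate

out = LongResidual3D()(torch.randn(1, 1, 32, 64, 64))
```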

  • “To address these issues, we propose a concise and effective framework based on 3D deep networks for pancreas segmentation, which can simultaneously achieve high segmentation accuracy and low time cost. Our framework is formulated in a coarse-to-fine manner. In the training stage, we first train a 3D FCN from the sub-volumes sampled from an entire CT volume. We call this ResDSN Coarse model, which aims to obtain the rough location of the target pancreas from the whole CT volume by making full use of the overall 3D context. Then, we train another 3D FCN from the sub-volumes sampled only from the ground truth bounding boxes of the target pancreas. We call this the ResDSN Fine model, which can refine the segmentation based on the coarse result.”
    A 3D Coarse-to-Fine Framework for Automatic Pancreas Segmentation
    Zhuotun Zhu, Yingda Xia, Wei Shen, Elliot K. Fishman, Alan L. Yuille
    arXiv:1712.00201v1 [cs.CV] 1 Dec 2017

  • “This work is motivated by the difficulty of small organ segmentation. As the target is often small, it is required to focus on a local input region, but sometimes the network is confused due to the lack of contextual information. We present the Recurrent Saliency Transformation Network, which enjoys three advantages. (i) Benefited by a (recurrent) global energy function, it is easier to generalize our models from training data to testing data. (ii) With joint optimization over two networks, both of them get improved individually. (iii) By incorporating multi-stage visual cues, more accurate segmentation results are obtained. As the fine stage is less likely to be confused by the lack of contexts, we also observe better convergence during iterations.”
    Recurrent Saliency Transformation Network: Incorporating Multi-Stage Visual Cues for Small Organ Segmentation
    Qihang Yu, Lingxi Xie, Yan Wang, Yuyin Zhou, Elliot K. Fishman, Alan L. Yuille
    arXiv:1709.04518v3 [cs.CV] 18 Nov 2017
  • “This paper presents a Recurrent Saliency Transformation Network. The key innovation is a saliency transformation module, which repeatedly converts the segmentation probability map from the previous iteration as spatial weights and applies these weights to the current iteration. This brings us two-fold benefits. In training, it allows joint optimization over the deep networks dealing with different input scales. In testing, it propagates multi-stage visual information throughout iterations to improve segmentation accuracy.”
    Recurrent Saliency Transformation Network: Incorporating Multi-Stage Visual Cues for Small Organ Segmentation
    Qihang Yu, Lingxi Xie, Yan Wang, Yuyin Zhou, Elliot K. Fishman, Alan L. Yuille
    arXiv:1709.04518v3 [cs.CV] 18 Nov 2017
  • “Automatic segmentation of an organ and its cystic region is a prerequisite of computer-aided diagnosis. In this paper, we focus on pancreatic cyst segmentation in abdominal CT scan. This task is important and very useful in clinical practice yet challenging due to the low contrast in boundary, the variability in location, shape and the different stages of the pancreatic cancer. Inspired by the high relevance between the location of a pancreas and its cystic region, we introduce extra deep supervision into the segmentation network, so that cyst segmentation can be improved with the help of relatively easier pancreas segmentation.”
    Deep Supervision for Pancreatic Cyst Segmentation in Abdominal CT Scans
    Yuyin Zhou, Lingxi Xie, Elliot K. Fishman, and Alan L. Yuille
    (in) Medical Image Computing and Computer Assisted Intervention − MICCAI 2017, pages 222-231
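The deep-supervision idea described above can be sketched as a combined loss: the network is additionally supervised on the (easier) pancreas mask, and the cyst head looks inside the region suggested by the pancreas branch. The heads, gating, and loss weighting below are assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CystWithDeepSupervision(nn.Module):
    """Shared backbone with an auxiliary pancreas head supervising a cyst head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.pancreas_head = nn.Conv2d(16, 1, 1)   # easier auxiliary task
        self.cyst_head = nn.Conv2d(16, 1, 1)       # harder main task

    def forward(self, x):
        f = self.backbone(x)
        pancreas = self.pancreas_head(f)
        cyst = self.cyst_head(f * torch.sigmoid(pancreas))  # cyst looks inside the pancreas
        return pancreas, cyst

def loss_fn(pancreas_logit, cyst_logit, pancreas_gt, cyst_gt, aux_weight=0.5):
    main = F.binary_cross_entropy_with_logits(cyst_logit, cyst_gt)
    aux = F.binary_cross_entropy_with_logits(pancreas_logit, pancreas_gt)
    return main + aux_weight * aux   # deep supervision from the pancreas labels
```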
  • “This paper presents the first system for pancreatic cyst segmentation which can work without human assistance on the testing stage. Motivated by the high relevance of a cystic pancreas and a pancreatic cyst, we formulate pancreas segmentation as an explicit variable in the formulation, and introduce deep supervision to assist the network training process. The joint optimization can be factorized into two stages, making our approach very easy to implement. We collect a dataset with 131 pathological cases. Based on a coarse-to-fine segmentation algorithm, our approach produces reasonable cyst segmentation results. It is worth emphasizing that our approach does not require any extra human annotations on the testing stage, which is especially practical in assisting common patients in cheap and periodic clinical applications.”
    Deep Supervision for Pancreatic Cyst Segmentation in Abdominal CT Scans
    Yuyin Zhou, Lingxi Xie, Elliot K. Fishman, and Alan L. Yuille
    (in) Medical Image Computing and Computer Assisted Intervention − MICCAI 2017, pages 222-231
  • “The pancreas is a highly deformable organ that has a shape and location that is greatly influenced by the presence of adjacent structures. This makes automated image analysis of the pancreas extremely challenging. A number of different approaches have been taken to automated pancreas analysis, including the use of anatomic atlases, the location of the splenic and portal veins, and state-of-the-art computer science methods such as deep learning.”
    Progress in Fully Automated Abdominal CT Interpretation
    Summers RM
    AJR 2016; 207:67–79
  • “A recent advance in computer science is the refinement of neural networks, a type of machine learning classifier used to make decisions from data. This refinement, known generically as deep learning but more specifically as convolutional neural networks, has shown dramatic improvements in automated intelligence applications. Initially drawing attention for impressive improvements in speech recognition and natural image interpretation, deep learning is now being applied to medical images, as described already in the sections on the pancreas and colitis.”
    Progress in Fully Automated Abdominal CT Interpretation
    Summers RM
    AJR 2016; 207:67–79