Importance of Better Human-Computer Interaction in the Era of Deep Learning: Mammography Computer-Aided Diagnosis as a Use Case.
J Am Coll Radiol. 2018 Jan;15(1 Pt A):49-52. doi: 10.1016/j.jacr.2017.08.027. Epub 2017 Oct 31. Nishikawa RM1, Bae KT2.
As witnessed at last year’s RSNA, the revolution is upon us: experts tout deep learning as a revolution in radiology. Some radiologists fear for their jobs because some experts suggest that deep learning will be an infallible replacement for radiologists [1,2]. Radiologists at the forefront of this technology have tried to assure their colleagues that there is nothing to fear and that deep learning will make their lives better by performing tedious, repetitive tasks and by providing intelligent decision support tools that enhance radiologists’ real expertise and performance. Although deep learning may indeed provide tools to enhance radiologists’ performance and efficiency, from our perspective, investigators place too much emphasis on developing the tools and not nearly enough on their optimal implementation.
In this opinion article, we focus on the use of deep learning in a specific clinical application of radiologic image analysis, namely computer-aided diagnosis (CAD). We restrict our argument to CAD, and specifically CAD for mammography, for two reasons. First, CAD is an obvious application of deep learning, and there is much activity both academically and commercially. Second, mammography CAD is the most widely used CAD application, and several studies have evaluated its impact on screening mammography. Although we take a narrow focus here, we assert that this CAD example is relevant to all other applications in which deep learning assists radiologists. We need more research on the human-machine interface in parallel with the development of algorithms.