TRANSCRIPT
#CMIMI18
Large Scale Automated Reading of Frontal and Lateral Chest X-Rays using Dual Convolutional Neural Networks
Jonathan Rubin, PhD; Deepan Sanghavi; Claire Zhao, PhD; Kathy Lee, PhD; Ashequl Qadir, PhD; Minnan Xu-Wilson, PhD
Philips Research North America
Introduction
Automatically classifying findings of interest within chest radiographs is challenging
Potential use cases: providing secondary reads, risk stratification, flagging potentially lethal conditions in critical care
Train CNNs to automatically classify 13 thoracic disease categories, including atelectasis, cardiomegaly, edema, effusion, infiltrates, masses, nodules, pneumonia, pneumothorax, and others
Related Work
[1] Li Yao, Eric Poblenz, Dmitry Dagunts, Ben Covington, Devon Bernard, and Kevin Lyman. Learning to diagnose from scratch by exploiting dependencies among labels. arXiv preprint arXiv:1710.10501, 2017.
[2] Pranav Rajpurkar, Jeremy Irvin, Kaylie Zhu, Brandon Yang, Hershel Mehta, Tony Duan, Daisy Ding, Aarti Bagul, Curtis Langlotz, Katie Shpanskaya, et al. CheXNet: Radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv preprint arXiv:1711.05225, 2017.
[3] Pulkit Kumar, Monika Grewal, and Muktabh Mayank Srivastava. Boosted cascaded convnets for multi-label classification of thoracic diseases in chest radiographs. arXiv preprint arXiv:1711.08760, 2017.
[4] Ivo M Baltruschat, Hannes Nickisch, Michael Grass, Tobias Knopp, and Axel Saalbach. Comparison of deep learning approaches for multi-label chest x-ray classification. arXiv preprint arXiv:1803.02315, 2018.
[5] Qingji Guan, Yaping Huang, Zhun Zhong, Zhedong Zheng, Liang Zheng, and Yi Yang. Diagnose like a radiologist: Attention guided convolutional neural network for thorax disease classification. arXiv preprint arXiv:1801.09927, 2018.
Related Work
Severe down-sampling to match pre-trained network dimensions [Rajpurkar et al., 2017; Guan et al., 2018]
Do not distinguish between PA and AP views [Yao et al., 2017; Rajpurkar et al., 2017; Guan et al., 2018; Kumar et al., 2017]
Do not split by subject [Yao et al., 2017; Guan et al., 2018]
Do not consider the lateral view [Yao et al., 2017; Rajpurkar et al., 2017; Guan et al., 2018; Kumar et al., 2017]
DualNet: simultaneously processes frontal and lateral images together using a dual network architecture
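The slide does not detail the dual architecture, so here is a minimal sketch of the idea: two view-specific feature extractors whose outputs are concatenated before a shared per-label classifier. All names, sizes, and the stub feature functions below are illustrative assumptions, not the authors' implementation; in practice each extractor would be a CNN backbone such as DenseNet-121.

```python
# Sketch of a DualNet-style classifier: two view-specific feature
# extractors feed one shared linear classification head.
# Everything here is a toy stand-in, not the authors' code.

def frontal_features(image):
    # Stand-in for a CNN backbone (e.g. DenseNet-121) applied to the
    # frontal view; returns a fixed-length feature vector.
    return [sum(row) / len(row) for row in image]

def lateral_features(image):
    # Stand-in for the lateral-view backbone.
    return [max(row) for row in image]

def dualnet_logits(frontal, lateral, weights, bias):
    # Concatenate both feature vectors, then apply one linear
    # classifier per disease label (13 labels in this work).
    feats = frontal_features(frontal) + lateral_features(lateral)
    return [sum(w * f for w, f in zip(ws, feats)) + b
            for ws, b in zip(weights, bias)]

# Toy 2x2 "images" and a 13-label head over the 4 concatenated features.
frontal = [[0.1, 0.2], [0.3, 0.4]]
lateral = [[0.5, 0.6], [0.7, 0.8]]
weights = [[0.1] * 4 for _ in range(13)]
bias = [0.0] * 13
logits = dualnet_logits(frontal, lateral, weights, bias)
print(len(logits))  # one logit per thoracic disease label
```

The key design point is that both views are fused at the feature level, so the classifier can weigh evidence from frontal and lateral projections jointly rather than averaging two independent predictions.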
Dataset
MIMIC-CXR: largest pre-released dataset of CXR images, intended for future dissemination
~470,000 chest x-rays (from ~63,000 unique patients)
Frontal and lateral views
DICOM format
Radiology reports available
Preprocessing
Train separate PA, AP and Lateral models
Split by subject (i.e. no subject overlap in datasets)
Train (70%) / Validation (10%) / Test (20%)
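The subject-wise split described above can be sketched as follows. The 70/10/20 ratios come from the slide; the helper name, seed, and grouping logic are assumptions about one reasonable way to implement it.

```python
import random

def split_by_subject(subject_ids, seed=0):
    # Shuffle unique subjects (not individual images) so that no
    # subject's x-rays appear in more than one of train/val/test.
    subjects = sorted(set(subject_ids))
    random.Random(seed).shuffle(subjects)
    n = len(subjects)
    n_train = int(0.7 * n)               # 70% train
    n_val = int(0.1 * n)                 # 10% validation
    train = set(subjects[:n_train])
    val = set(subjects[n_train:n_train + n_val])
    test = set(subjects[n_train + n_val:])   # remaining ~20% test
    return train, val, test

# Example: 100 subjects, each of whom may have several x-rays.
train, val, test = split_by_subject(range(100))
assert not (train & val) and not (train & test) and not (val & test)
print(len(train), len(val), len(test))  # 70 10 20
```

Splitting by subject rather than by image prevents optimistic evaluation: images from the same patient are highly correlated, so letting them straddle the train/test boundary leaks information.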
Models
Baseline architecture: DenseNet-121, pre-trained with ImageNet weights
Network modifications: single-channel input
Multi-class, multi-label problem: binary cross-entropy loss function
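Because one radiograph can carry several findings at once, the loss is a sum of independent binary cross-entropies, one per label, rather than a softmax over mutually exclusive classes. A minimal illustration in plain Python, with toy probabilities standing in for model outputs:

```python
import math

def multilabel_bce(probs, targets):
    # Mean binary cross-entropy over all labels: each of the 13
    # disease labels is treated as an independent binary problem.
    eps = 1e-7  # clamp probabilities to avoid log(0)
    total = 0.0
    for p, t in zip(probs, targets):
        p = min(max(p, eps), 1 - eps)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(probs)

# Toy example: 3 of 13 labels present, model fairly confident.
targets = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0]
probs   = [0.9, 0.1, 0.2, 0.8, 0.05, 0.1, 0.1, 0.1, 0.7, 0.1, 0.1, 0.1, 0.1]
loss = multilabel_bce(probs, targets)
print(round(loss, 4))
```

In a framework such as PyTorch this corresponds to a built-in loss applied to a 13-dimensional output, with a sigmoid (not softmax) on each logit.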
Test-set Results (AUC)
Individual: applies separate frontal and lateral CNN models to each image.
DualNet: simultaneously processes frontal and lateral images together using the dual network architecture.
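Per-label AUC can be computed from predicted scores and binary ground truth; below is a small rank-based implementation (a generic sketch of the metric, not the authors' evaluation code).

```python
def auc(scores, labels):
    # AUC as the probability that a randomly chosen positive example
    # outranks a randomly chosen negative one (ties count as half).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy check: perfectly separated scores give AUC = 1.0.
print(auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0
```

AUC is threshold-free, which makes it a natural choice for comparing classifiers across 13 labels with very different prevalence rates.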
Conclusions
A collection of CNNs was evaluated, trained on a large set of chest x-ray images to recognize thorax diseases.
Processing both frontal and lateral inputs simultaneously leads to improvements compared to applying separately trained baseline classifiers.
AP vs. PA: AP is more difficult. AP images are generally acquired in the intensive care unit, where patients are too sick for an upright PA image to be taken, and are more likely to contain tubes, lines, medical devices, etc.
Baseline results: many model improvements are possible.