Abstract
Chest X-ray image classification provides an essential way to diagnose lung diseases. However, this task is challenging due to the professional knowledge required and the high cost of annotating Chest X-ray images. A common solution for medical data annotation is to use Natural Language Processing (NLP) techniques to extract labels from radiology reports. However, due to the complex structure of radiology reports, NLP-based annotation inevitably introduces noisy labels into the data, making analysis very difficult. Most existing methods train a classification model (such as a convolutional neural network) directly on the original data and ignore the noisy labels, which may lead to very limited diagnosis performance. In this work, we propose a novel Bootstrap Knowledge Distillation (BKD) method, which seeks to improve label quality gradually, thereby reducing the noise level of the whole dataset. We theoretically show that the distribution of distilled labels gradually approaches the distribution of the unseen true labels. Extensive experimental results on real-world Chest X-ray datasets demonstrate the effectiveness of the proposed method.
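To make the bootstrap idea concrete, here is a minimal sketch of the label-refinement loop described above. It is our illustration, not the authors' code: the names (`bootstrap_kd`, `distill_labels`, `model_factory`, `train_fn`) and the choice of a fixed mixing weight `lam` are hypothetical, and we assume multi-label sigmoid outputs in PyTorch.

```python
import torch

def distill_labels(noisy_labels, model_preds, lam):
    """Blend the original noisy labels with the previous model's predictions.

    Implements the convex combination y_t = lam * y + (1 - lam) * s_{t-1};
    lam controls how much trust is placed in the original labels.
    """
    return lam * noisy_labels + (1.0 - lam) * model_preds

def bootstrap_kd(model_factory, train_fn, images, noisy_labels, lam=0.5, steps=3):
    """Iteratively retrain on progressively distilled (less noisy) labels."""
    labels = noisy_labels.clone()
    model = None
    for t in range(steps):
        model = train_fn(model_factory(), images, labels)   # fit f_t on current targets
        with torch.no_grad():
            preds = torch.sigmoid(model(images))            # soft predictions s_t
        labels = distill_labels(noisy_labels, preds, lam)   # distilled targets for f_{t+1}
    return model, labels
```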
Acknowledgements
This work was partially supported by the Guangdong University Characteristic Innovation Project (2017WTSCX002), the Guangdong Natural Science Foundation Doctoral Research Project (2018A030310365), the International Cooperation Open Project of the State Key Laboratory of Subtropical Building Science, South China University of Technology (2019ZA02), and the Science and Technology Program of Guangzhou, China under Grant 202007030007.
A Theoretical Analysis
Proposition 1
At each time step \(t\), we assume the variance terms of \(D_y\) and \(D_{s_{t-1}}\) are independent, so that the distilled distance satisfies
\[D_{y_t^\lambda} = \lambda^2 D_y + (1 - \lambda)^2 D_{s_{t-1}},\]
and, for a suitable \(\lambda\), \(D_{y_t^\lambda}\) is smaller than both \(D_y\) and \(D_{s_{t-1}}\). Here \(s_{t-1}\) and \(y\) are the labels output by model \(f_{t-1}\) and the original (noisy) labels, respectively. \(D_{y_t^\lambda}\) reaches its optimum
\[D_{y_t^\lambda}^{*} = \frac{D_y D_{s_{t-1}}}{D_y + D_{s_{t-1}}} \le \min\left(D_y,\, D_{s_{t-1}}\right)\]
if and only if \(\lambda^* = \frac{D_{s_{t-1}}}{D_{s_{t-1}} + D_y}\).
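As an illustrative numeric check (values chosen here for concreteness, not taken from the paper): if \(D_y = 0.2\) and \(D_{s_{t-1}} = 0.1\), then \(\lambda^* = \frac{0.1}{0.1 + 0.2} = \frac{1}{3}\), and \(D_{y_t^\lambda}^{*} = \left(\frac{1}{3}\right)^2 \cdot 0.2 + \left(\frac{2}{3}\right)^2 \cdot 0.1 \approx 0.067 = \frac{0.2 \cdot 0.1}{0.2 + 0.1}\), which is indeed smaller than both \(D_y\) and \(D_{s_{t-1}}\).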
Proof
At time step \(t = 0\), we train a model \(f_0\) on a clean dataset \(\mathcal{D}_c\); the expected prediction error can be decomposed into a bias term and a variance term:
\[\mathbb{E}_{\mathcal{D}_\text{test}}[\ell(s_0, y^*)] = \ell(\bar{s}_0, y^*) + \mathbb{E}_{\mathcal{D}_\text{test}}[\ell(s_0, \bar{s}_0)],\]
where \(\ell(\cdot, \cdot)\) is the \(\ell_2\) distance and \(\bar{s}_0 = \mathbb{E}_{\mathcal{D}_\text{test}}[s_0]\). Given the high capacity of CNN models, we make the reasonable assumption that the bias term \(\ell(\bar{s}_0, y^*)\) is close to zero. Also, the variance terms of \(D_y\) and \(D_{s_0}\) are independent. This leads to
\[D_{y_1^\lambda} = \lambda^2 D_y + (1 - \lambda)^2 D_{s_0}.\]
According to Eq. (12), when \(\lambda = \frac{D_{s_0}}{D_{s_0} + D_y}\), \(D_{y_1^\lambda}\) reaches its minimum,
\[D_{y_1^\lambda}^{*} = \frac{D_y D_{s_0}}{D_y + D_{s_0}} \le \min\left(D_y,\, D_{s_0}\right).\]
At each time step \(t\), we train model \(f_t\) on a dataset \(\mathcal{D}_t = \{(x, y_t^\lambda)\}\). We assume \(\ell(\bar{s}_t, y_t^\lambda)\) is close to zero, where \(s_t = f_t(x)\) and \(\bar{s}_t = \mathbb{E}_{\mathcal{D}_\text{test}}[s_t]\); this leads to \(\mathbb{E}_{\mathcal{D}_\text{test}}[s_t] \approx y_t^\lambda\), so that the distance between \(s_t\) and \(y^*\) is approximately \(D_{y_t^\lambda}\), i.e., \(D_{s_t} \approx D_{y_t^\lambda}\). Repeating the argument at step \(t\) gives
\[D_{y_t^\lambda} = \lambda^2 D_y + (1 - \lambda)^2 D_{s_{t-1}}\]
and
\[D_{y_t^\lambda}^{*} = \frac{D_y D_{s_{t-1}}}{D_y + D_{s_{t-1}}} \le \min\left(D_y,\, D_{s_{t-1}}\right)\]
if and only if \(\lambda^* = \frac{D_{s_{t-1}}}{D_{s_{t-1}} + D_y}\). \(\square\)
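The proposition is easy to sanity-check numerically. The following small Monte Carlo sketch (ours, for illustration only) simulates labels and predictions whose noise around the true labels is independent, and evaluates \(D_{y^\lambda}\) for several values of \(\lambda\):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
y_true = rng.normal(size=n)                  # unseen true labels y*
y = y_true + rng.normal(scale=0.4, size=n)   # noisy labels:      D_y ~= 0.16
s = y_true + rng.normal(scale=0.3, size=n)   # model predictions: D_s ~= 0.09 (independent noise)

D_y = np.mean((y - y_true) ** 2)
D_s = np.mean((s - y_true) ** 2)
lam_star = D_s / (D_s + D_y)                 # predicted optimal mixing weight

for lam in (0.0, 0.25, lam_star, 0.75, 1.0):
    y_lam = lam * y + (1 - lam) * s          # distilled labels y^lambda
    D_lam = np.mean((y_lam - y_true) ** 2)
    print(f"lambda = {lam:.3f}   D = {D_lam:.4f}")

# The minimum occurs near lambda* = D_s / (D_s + D_y), with value close to
# D_y * D_s / (D_y + D_s), smaller than both D_y and D_s.
```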
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Li, M., Xu, J. (2021). Bootstrap Knowledge Distillation for Chest X-ray Image Classification with Noisy Labelling. In: Peng, Y., Hu, SM., Gabbouj, M., Zhou, K., Elad, M., Xu, K. (eds) Image and Graphics. ICIG 2021. Lecture Notes in Computer Science, vol 12889. Springer, Cham. https://doi.org/10.1007/978-3-030-87358-5_57
DOI: https://doi.org/10.1007/978-3-030-87358-5_57
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-87357-8
Online ISBN: 978-3-030-87358-5
eBook Packages: Computer Science; Computer Science (R0)