Abstract
Convolutional neural networks (CNNs) are vulnerable to adversarial noise, which can have disastrous consequences in safety- or security-sensitive systems. This paper proposes a novel mechanism that improves the robustness of medical image classification systems by giving CNN classifiers a denoising ability through a naturally embedded auto-encoder, together with high-level feature invariance to general noise. The proposed denoising mechanism can be adapted to many model architectures, and can therefore be easily combined with existing models and denoising mechanisms to further improve the robustness of CNN classifiers. The effectiveness of the proposed method is confirmed by comprehensive evaluations on two medical image classification tasks.
F.-F. Xue and J. Peng contributed equally to this paper.
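As a rough illustration of the idea summarized in the abstract, the following is a minimal sketch, assuming a PyTorch implementation, of a CNN classifier whose feature extractor also serves as the encoder of an embedded denoising auto-encoder, trained with a classification loss, a reconstruction loss toward the clean image, and a high-level feature-invariance term between clean and noisy inputs. All layer choices, names, and loss weights here are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch (assumptions: PyTorch; all modules and weights are illustrative,
# not the authors' actual design) of a classifier with an embedded denoising
# auto-encoder and a high-level feature-invariance loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # Encoder doubles as the classifier's feature extractor.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder reconstructs a clean image from the encoded features.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes),
        )

    def forward(self, x):
        feat = self.encoder(x)
        return self.classifier(feat), self.decoder(feat), feat

def training_loss(model, clean, noisy, labels, lam_rec=1.0, lam_inv=1.0):
    """Classification + reconstruction + high-level feature-invariance loss."""
    logits_n, recon_n, feat_n = model(noisy)
    with torch.no_grad():
        _, _, feat_c = model(clean)          # clean-image features as the target
    loss_cls = F.cross_entropy(logits_n, labels)
    loss_rec = F.mse_loss(recon_n, clean)    # denoise the noisy input toward the clean one
    loss_inv = F.mse_loss(feat_n, feat_c)    # keep high-level features invariant to noise
    return loss_cls + lam_rec * loss_rec + lam_inv * loss_inv
```

Because the denoising branch shares the encoder with the classifier, this kind of mechanism can in principle be attached to different backbone architectures, which is the portability property the abstract emphasizes.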
Acknowledgement
This work is supported in part by the National Key Research and Development Plan (grant No. 2018YFC1315402) and by the Guangdong Key Research and Development Plan (grant No. 2019B020228001).