Abstract
Few-shot learning (FSL) methods are increasingly adopted in settings where data is scarce, notably in medical domains where annotations are expensive to obtain. Deep neural networks have been shown to be vulnerable to adversarial attacks, and this vulnerability is exacerbated in FSL by the small number of training examples. In this paper, we provide a framework for making few-shot segmentation models adversarially robust in the medical domain, where such attacks can severely impact the decisions of the clinicians who use them. We propose a novel robust few-shot segmentation framework, Prototypical Neural Ordinary Differential Equation (PNODE), that defends against gradient-based adversarial attacks. We show that our framework is more robust than traditional defense mechanisms such as adversarial training, which increases training time and protects only against the types of adversarial examples seen during training. Our framework generalises well to common attacks such as FGSM, PGD and SMIA while keeping the number of model parameters comparable to existing few-shot segmentation models. We demonstrate the effectiveness of our approach on three publicly available multi-organ segmentation datasets, in both in-domain and cross-domain settings, by attacking the support and query sets without the need for ad-hoc adversarial training.
P. Pandey and A. Vardhan contributed equally.
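To make the abstract concrete, below is a minimal sketch of the two ingredients PNODE combines: prototypical few-shot segmentation (class prototypes from masked average pooling over support features, query pixels labelled by distance to the prototypes) and a continuous-depth Neural-ODE feature block. This is an illustration written for this summary, not the authors' released implementation; all module names, channel counts and solver settings (`ODEBlock`, `PrototypicalNODE`, `n_steps`, ...) are assumptions, and a fixed-step Euler solver stands in for whatever solver the paper actually uses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ODEBlock(nn.Module):
    """Continuous-depth block: integrates dh/dt = f(h) from t=0 to t=1.
    A fixed-step Euler loop keeps the sketch dependency-free; a real
    implementation would typically use an adaptive ODE solver."""
    def __init__(self, channels: int, n_steps: int = 8):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.n_steps = n_steps

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        dt = 1.0 / self.n_steps
        for _ in range(self.n_steps):  # Euler step: h <- h + dt * f(h)
            h = h + dt * self.f(h)
        return h

class PrototypicalNODE(nn.Module):
    """Prototype-based segmentation head over Neural-ODE features."""
    def __init__(self, in_ch: int = 1, feat_ch: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1),
            nn.ReLU(),
            ODEBlock(feat_ch),
        )

    def forward(self, support_img, support_mask, query_img):
        # support_img: (S, in_ch, H, W); support_mask: (S, 1, H, W) in {0, 1}
        fs = self.encoder(support_img)                        # (S, C, H, W)
        fq = self.encoder(query_img)                          # (Q, C, H, W)
        m = F.interpolate(support_mask, size=fs.shape[-2:], mode="nearest")
        # Masked average pooling over the support set -> class prototypes
        fg = (fs * m).sum(dim=(0, 2, 3)) / m.sum().clamp_min(1e-6)
        bg = (fs * (1 - m)).sum(dim=(0, 2, 3)) / (1 - m).sum().clamp_min(1e-6)
        protos = torch.stack([bg, fg])                        # (2, C)
        # Pixel-wise logits: negative squared distance to each prototype
        diff = fq.unsqueeze(1) - protos[None, :, :, None, None]
        return -(diff ** 2).sum(dim=2)                        # (Q, 2, H, W)
```

For the evaluation protocol the abstract describes, a sketch of one of the gradient-based attacks follows. This is standard one-step FGSM (Goodfellow et al., ICLR 2015) applied to the query image; attacking the support set would perturb `support_img` analogously. The epsilon value is illustrative.

```python
def fgsm_on_query(model, support_img, support_mask, query_img, query_lbl,
                  eps: float = 8 / 255):
    """One-step FGSM: perturb the query image along the sign of the
    loss gradient. query_lbl: (Q, H, W) integer class map."""
    x = query_img.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(support_img, support_mask, x), query_lbl)
    loss.backward()
    return (x + eps * x.grad.sign()).detach().clamp_(0, 1)
```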
References
Fei-Fei, L., Fergus, R., Perona, P.: One-shot learning of object categories. IEEE TPAMI, vol. 28 (2006)
Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: ICLR (2015)
Landman, B., Xu, Z., Iglesias, J.E., Styner, M., Langerak, T.R., Klein, A.: MICCAI multi-atlas labeling beyond the cranial vault - workshop and challenge. In: MICCAI Multi-Atlas Labeling Beyond Cranial Vault Workshop and Challenge (2015)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)
Kurakin, A., Goodfellow, I.J., Bengio, S.: Adversarial examples in the physical world. In: ICLR (Workshop) (2017)
Snell, J., Swersky, K., Zemel, R.S.: Prototypical networks for few-shot learning. In: NeurIPS (2017)
Ravi, S., Larochelle, H.: Optimization as a model for few-shot learning. In: ICLR (2017)
Kurakin, A., Goodfellow, I.J., Bengio, S.: Adversarial machine learning at scale. In: ICLR (2017)
Moosavi-Dezfooli, S.-M., Fawzi, A., Fawzi, O., Frossard, P.: Universal adversarial perturbations. In: CVPR (2017)
Xie, C., Wang, J., Zhang, Z., Zhou, Y., Xie, L., Yuille, A.: Adversarial examples for semantic segmentation and object detection. In: ICCV (2017)
Dong, N., Xing, E.P.: Few-shot semantic segmentation with prototype learning. In: BMVC (2018)
Sung, F., Yang, Y., Zhang, L., Xiang, T., Torr, P.H.S., Hospedales, T.M.: Learning to compare: relation network for few-shot learning. In: CVPR (2018)
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. In: ICLR (2018)
Chen, R.T.Q., Rubanova, Y., Bettencourt, J., Duvenaud, D.: Neural ordinary differential equations. In: NeurIPS (2018)
Paschali, M., Conjeti, S., Navarro, F., Navab, N.: Generalizability vs. robustness: investigating medical imaging networks using adversarial examples. In: MICCAI (2018)
Arnab, A., Miksik, O., Torr, P.H.S.: On the robustness of semantic segmentation models to adversarial attacks. In: CVPR (2018)
Zhang, H., Yu, Y., Jiao, J., Xing, E., El Ghaoui, L., Jordan, M.: Theoretically principled trade-off between robustness and accuracy. In: ICML (2019)
Zhang, H., Chen, H., Song, Z., Boning, D., Dhillon, I., Hsieh, C.-J.: The limitations of adversarial training and the blind-spot attack. In: ICLR (2019)
Wang, K., Liew, J.H., Zou, Y., Zhou, D., Feng, J.: PANet: few-shot image semantic segmentation with prototype alignment. In: ICCV (2019)
Zhao, A., Balakrishnan, G., Durand, F., Guttag, J.V., Dalca, A.V.: Data augmentation using learned transformations for one-shot medical image segmentation. In: CVPR (2019)
Ouyang, C., Kamnitsas, K., Biffi, C., Duan, J., Rueckert, D.: Data efficient unsupervised domain adaptation for cross-modality image segmentation. In: MICCAI (2019)
Ozbulak, U., Van Messem, A., De Neve, W.: Impact of adversarial examples on deep learning models for biomedical image segmentation. In: MICCAI (2019)
Simpson, A.L., et al.: A large annotated medical image dataset for the development and evaluation of segmentation algorithms. arXiv preprint arXiv:1902.09063 (2019)
Roy, A.G., Siddiqui, S., Pölsterl, S., Navab, N., Wachinger, C.: ‘Squeeze & excite’ guided few-shot segmentation of volumetric images. Med. Image Anal. 59 (2020)
Rister, B., Yi, D., Shivakumar, K., Nobashi, T., Rubin, D.L.: CT-ORG, a new dataset for multiple organ segmentation in computed tomography. Sci. Data (2020). https://doi.org/10.1038/s41597-020-00715-8
Li, X., Wei, T., Chen, Y.P., Tai, Y.-W., Tang, C.-K.: FSS-1000: a 1000-class dataset for few-shot segmentation. In: CVPR (2020)
Yan, H., Du, J., Tan, V.Y.F., Feng, J.: On robustness of neural ordinary differential equations. In: ICLR (2020)
Liu, X., Xiao, T., Si, S., Cao, Q., Kumar, S., Hsieh, C.-J.: How does noise help robustness? Explanation and exploration under the neural SDE framework. In: CVPR (2020)
Goldblum, M., Fowl, L., Goldstein, T.: Adversarially robust few-shot learning: a meta-learning approach. In: NeurIPS (2020)
Park, S., So, J.: On the effectiveness of adversarial training in defending against adversarial example attacks for image classification. Appl. Sci. 10(22), 8079 (2020). https://doi.org/10.3390/app10228079
Kang, Q., Song, Y., Ding, Q., Tay, W.P.: Stable neural ODE with Lyapunov-stable equilibrium points for defending against adversarial attacks. In: NeurIPS (2021)
Tang, H., Liu, X., Sun, S., Yan, X., Xie, X.: Recurrent mask refinement for few-shot medical image segmentation. In: ICCV (2021)
Qi, G., Gong, L., Song, Y., Ma, K., Zheng, Y.: Stabilized medical image attacks. In: ICLR (2021)
Xu, X., Zhao, H., Jia, J.: Dynamic divide-and-conquer adversarial training for robust semantic segmentation. In: ICCV (2021)