
PAS-Net: Rapid Prediction of Antibiotic Susceptibility from Fluorescence Images of Bacterial Cells Using Parallel Dual-Branch Network

  • Conference paper
  • First Online:
Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 (MICCAI 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14227)

Abstract

In recent years, the emergence and rapid spread of multi-drug resistant bacteria have become a serious threat to global public health. Antibiotic susceptibility testing (AST) is used clinically to determine the susceptibility of bacteria to antibiotics, thereby guiding physicians in the rational use of drugs and slowing the development of bacterial resistance. However, traditional phenotypic AST methods based on bacterial culture are time-consuming and laborious (usually 24–72 h). Because delayed identification of drug-resistant bacteria increases patient morbidity and mortality, there is an urgent clinical need for a rapid AST method that allows physicians to prescribe appropriate antibiotics promptly. In this paper, we present a parallel dual-branch network (i.e., PAS-Net) to predict bacterial antibiotic susceptibility from fluorescence images. Specifically, we use the feature interaction unit (FIU) as a connecting bridge to align and fuse, interactively and effectively, the local features from the convolutional neural network (CNN) branch (C-branch) with the global representations from the Transformer branch (T-branch). Moreover, we propose a new hierarchical multi-head self-attention (HMSA) module that reduces computational overhead while maintaining the global relationship modeling capability of the T-branch. PAS-Net is evaluated on a fluorescence image dataset of clinically isolated Pseudomonas aeruginosa (PA) and achieves promising prediction performance. We also verify the generalization of our algorithm for fluorescence image classification on two public HEp-2 cell datasets.
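
The full text is behind the access wall, but the abstract describes the architecture in enough detail for a rough illustration. The PyTorch sketch below is a minimal, hypothetical rendering of the parallel dual-branch idea: a CNN branch and a Transformer branch run side by side, a feature interaction unit (FIU) exchanges information between them, and the Transformer branch uses a hierarchical multi-head self-attention in which keys and values are computed on spatially pooled tokens. All module definitions, shapes, and the pooling choice are assumptions made for illustration, not the authors' implementation.

import torch
import torch.nn as nn


class HMSA(nn.Module):
    # Hierarchical multi-head self-attention (assumed form): queries keep full
    # resolution, while keys/values come from spatially pooled tokens, which
    # shrinks the attention cost roughly by the square of the pooling factor.
    def __init__(self, dim, num_heads=4, pool=2):
        super().__init__()
        self.pool = nn.AvgPool2d(pool)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x, h, w):
        # x: (B, N, C) tokens laid out on an h x w grid (N == h * w)
        b, n, c = x.shape
        kv = self.pool(x.transpose(1, 2).reshape(b, c, h, w))   # (B, C, h/p, w/p)
        kv = kv.flatten(2).transpose(1, 2)                       # (B, N/p^2, C)
        out, _ = self.attn(x, kv, kv)
        return out


class FIU(nn.Module):
    # Feature interaction unit (assumed form): aligns CNN feature maps with
    # Transformer tokens and fuses them in both directions.
    def __init__(self, dim):
        super().__init__()
        self.c2t = nn.Linear(dim, dim)   # local features -> token space
        self.t2c = nn.Linear(dim, dim)   # tokens -> feature-map space

    def forward(self, feat, tokens):
        # feat: (B, C, H, W) local features; tokens: (B, H*W, C) global tokens
        b, c, h, w = feat.shape
        feat_tok = feat.flatten(2).transpose(1, 2)               # (B, H*W, C)
        tokens = tokens + self.c2t(feat_tok)                     # inject local detail
        feat = feat + self.t2c(tokens).transpose(1, 2).reshape(b, c, h, w)
        return feat, tokens


class DualBranchBlock(nn.Module):
    # One stage of the parallel design: the C-branch (convolutions) and the
    # T-branch (hierarchical self-attention) run side by side and exchange
    # features through the FIU.
    def __init__(self, dim, h, w):
        super().__init__()
        self.h, self.w = h, w
        self.conv = nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1),
                                  nn.BatchNorm2d(dim), nn.ReLU(inplace=True))
        self.hmsa = HMSA(dim)
        self.fiu = FIU(dim)

    def forward(self, feat, tokens):
        feat = self.conv(feat)                                   # local features
        tokens = tokens + self.hmsa(tokens, self.h, self.w)      # global context
        return self.fiu(feat, tokens)


# Usage with hypothetical shapes: a 16 x 16 grid of 64-dimensional features/tokens.
block = DualBranchBlock(dim=64, h=16, w=16)
feat, tokens = block(torch.randn(2, 64, 16, 16), torch.randn(2, 16 * 16, 64))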

Acknowledgement

This work was supported by the National Natural Science Foundation of China (Nos. 62101338, 61871274, 32270196, and U1902209), the National Natural Science Foundation of Guangdong Province (2019A1515111205), the Shenzhen Key Basic Research Project (KCXFZ20201221173213036, JCYJ20220818095809021, SGDX202011030958020–07, JCYJ201908081556188–06, and JCYJ20190808145011-259), and the Shenzhen Peacock Plan Team Project (grant number KQTD20200909113758–004).

Author information

Corresponding author

Correspondence to Baiying Lei.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Xiong, W., Yu, K., Yang, L., Lei, B. (2023). PAS-Net: Rapid Prediction of Antibiotic Susceptibility from Fluorescence Images of Bacterial Cells Using Parallel Dual-Branch Network. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14227. Springer, Cham. https://doi.org/10.1007/978-3-031-43993-3_56

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-43993-3_56

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43992-6

  • Online ISBN: 978-3-031-43993-3

  • eBook Packages: Computer Science, Computer Science (R0)
