
Parameter Transfer Learning Measured by Image Similarity to Detect CT of COVID-19

  • Conference paper
Bioinformatics Research and Applications (ISBRA 2021)

Part of the book series: Lecture Notes in Computer Science (LNBI, volume 13064)


Abstract

COVID-19 has spread throughout the world since 2019, and the epidemic has placed enormous demands on COVID-19 detection. This paper proposes ParNet, a model that uses parameter transfer learning to initialize its training weights from weights pre-trained on ImageNet, and justifies this choice theoretically through four measures of image similarity: cosine similarity, average hash, perceptual hash, and difference hash, each of which captures similarity from a different angle. To further improve ParNet, a parallel channel-and-spatial attention mechanism replaces the channel-only attention mechanism, and the Swish activation function replaces ReLU. Applied to COVID-19 CT detection, ParNet outperforms both classic and state-of-the-art models. Source code is publicly available.
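The theoretical justification rests on comparing natural images with CT slices under the four similarity measures named above. The sketch below is not the authors' implementation; it shows one common way to compute each measure with NumPy, Pillow, and SciPy, and the resize dimensions, file-path arguments, and helper names are illustrative assumptions.

```python
# A minimal sketch (not the paper's code) of the four similarity measures the
# abstract lists: cosine similarity, average hash (aHash), perceptual hash (pHash),
# and difference hash (dHash). Sizes and helper names are illustrative.
import numpy as np
from PIL import Image
from scipy.fftpack import dct

def to_gray(path, size):
    """Load an image, convert it to grayscale, and resize it to `size` (w, h)."""
    return np.asarray(Image.open(path).convert("L").resize(size), dtype=np.float64)

def cosine_similarity(path_a, path_b, size=(64, 64)):
    """Cosine of the angle between the two flattened grayscale images."""
    a = to_gray(path_a, size).ravel()
    b = to_gray(path_b, size).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def average_hash(path, size=(8, 8)):
    """aHash: each pixel of an 8x8 thumbnail is compared to the mean intensity."""
    img = to_gray(path, size)
    return (img > img.mean()).ravel()            # 64-bit boolean fingerprint

def difference_hash(path, size=(9, 8)):
    """dHash: compare horizontally adjacent pixels of a 9x8 thumbnail."""
    img = to_gray(path, size)
    return (img[:, 1:] > img[:, :-1]).ravel()    # 64-bit boolean fingerprint

def perceptual_hash(path, size=(32, 32)):
    """pHash (common variant): low-frequency 2D DCT block thresholded at its median."""
    img = to_gray(path, size)
    freq = dct(dct(img, axis=0, norm="ortho"), axis=1, norm="ortho")
    low = freq[:8, :8]
    return (low > np.median(low)).ravel()        # 64-bit boolean fingerprint

def hamming_similarity(h1, h2):
    """Fraction of matching bits between two hash fingerprints."""
    return float(np.mean(h1 == h2))
```

Two fingerprints are compared by the fraction of matching bits (Hamming similarity); consistently high scores between ImageNet-style images and CT slices under all four measures would support the abstract's claim that ImageNet-pretrained weights are a reasonable initialization for CT classification.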



Acknowledgments

We thank the anonymous reviewers for their valuable suggestions and comments. This work was supported by the National Natural Science Foundation of China (62062067), the Natural Science Foundation of Yunnan Province (2017FA032), and the Training Plan for Young and Middle-aged Academic Leaders of Yunnan Province (2018HB031).

Author information


Corresponding author

Correspondence to Shunfang Wang.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhao, C., Wang, S. (2021). Parameter Transfer Learning Measured by Image Similarity to Detect CT of COVID-19. In: Wei, Y., Li, M., Skums, P., Cai, Z. (eds.) Bioinformatics Research and Applications. ISBRA 2021. Lecture Notes in Computer Science, vol 13064. Springer, Cham. https://doi.org/10.1007/978-3-030-91415-8_23


  • DOI: https://doi.org/10.1007/978-3-030-91415-8_23


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-91414-1

  • Online ISBN: 978-3-030-91415-8

  • eBook Packages: Computer Science (R0)
