Boosting and rectifying few-shot learning prototype network for skin lesion classification based on the internet of medical things

Published in Wireless Networks.

Abstract

The Internet of Medical Things (IoMT), together with advances in wireless technologies, has transformed traditional healthcare into smart healthcare. Computer-aided diagnosis based on the IoMT is thriving with the help of deep learning. However, fully supervised deep learning must be trained with enough annotated samples, which are difficult to obtain in healthcare. Few-shot learning networks can be trained with only a small number of annotated samples, which alleviates the difficulty of medical image collection and annotation. We propose a few-shot prototype network, built on the IoMT, to address the shortage of annotated samples. First, the capability of the feature extractor is enhanced by adding a contrastive learning branch. Second, a novel strategy for constructing positive and negative sample pairs is proposed for contrastive learning, which avoids maintaining a dedicated sample queue. Third, the contrastive learning branch is also used to rectify corrupted samples and refine the category prototypes. Finally, a hybrid loss, consisting of a prototype loss and a contrastive loss, is used to improve classification accuracy and convergence speed. Our method achieved satisfactory performance on the mini-ISIC-2\(^i\) and mini-ImageNet datasets.
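The abstract describes two standard components that can be sketched generically: class prototypes computed as mean support embeddings, and a hybrid objective combining a prototype loss with a contrastive (InfoNCE-style) loss. The paper's exact architecture, pair-construction strategy, and loss weighting are not reproduced here; the function names, the InfoNCE form, and the weighting factor `lam` below are illustrative assumptions, shown as a minimal NumPy sketch.

```python
import numpy as np

def prototypes(support, labels, n_way):
    # Class prototype = mean of that class's support embeddings.
    return np.stack([support[labels == c].mean(axis=0) for c in range(n_way)])

def proto_log_probs(queries, protos):
    # Softmax over negative squared Euclidean distances to each prototype.
    d = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    logits = -d
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    return np.log(p / p.sum(axis=1, keepdims=True))

def hybrid_loss(queries, q_labels, protos, z1, z2, lam=0.5, tau=0.1):
    # Prototype loss: cross-entropy of each query against its class prototype.
    lp = -proto_log_probs(queries, protos)[np.arange(len(q_labels)), q_labels].mean()
    # Contrastive loss (InfoNCE): z1[i] and z2[i] are embeddings of two
    # views of the same image (the positive pair); all other rows are negatives.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau
    sim -= sim.max(axis=1, keepdims=True)
    p = np.exp(sim)
    p /= p.sum(axis=1, keepdims=True)
    lc = -np.log(p[np.arange(len(z1)), np.arange(len(z1))]).mean()
    return lp + lam * lc
```

In practice the embeddings would come from a shared feature extractor, with the contrastive branch also used to down-weight corrupted support samples before averaging them into prototypes.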



Author information

Corresponding author: HongHao Gao.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article


Cite this article

Xiao, J., Xu, H., Fang, D. et al. Boosting and rectifying few-shot learning prototype network for skin lesion classification based on the internet of medical things. Wireless Netw 29, 1507–1521 (2023). https://doi.org/10.1007/s11276-021-02713-z

