Abstract
The classification of esophageal disease based on gastroscopic images is important in clinical treatment and also helps provide patients with follow-up treatment plans and prevent lesion deterioration. In recent years, deep learning has achieved many satisfactory results in gastroscopic image classification tasks. However, most of these methods require a training set consisting of large numbers of images labeled by experienced experts. To reduce the image annotation burden and improve classification performance on small labeled gastroscopic image datasets, this study proposes a novel semi-supervised efficient contrastive learning (SSECL) classification method for esophageal disease. First, an efficient contrastive pair generation (ECPG) module is proposed to generate efficient contrastive pairs (ECPs), taking advantage of the high similarity among images from the same lesion. Then, an unsupervised visual feature representation capturing the general features of esophageal gastroscopic images is learned by unsupervised efficient contrastive learning (UECL). Finally, the feature representation is transferred to the downstream esophageal disease classification task. The experimental results demonstrate that the classification accuracy of SSECL is 92.57%, which is better than that of other state-of-the-art semi-supervised methods and is also 2.28% higher than that of a classification method based on transfer learning (TL). Thus, SSECL addresses the challenging problem of improving classification results on small gastroscopic image datasets by fully utilizing unlabeled gastroscopic images and the high similarity among images from the same lesion. It also brings new insights into medical image classification tasks.
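The two core ideas summarized above, pairing unlabeled images from the same lesion and learning a representation from those pairs with a contrastive objective, can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration rather than the authors' implementation: the helper names (make_lesion_pairs, info_nce_loss, lesion_ids) are hypothetical, and an InfoNCE-style loss is used as a generic stand-in for the paper's UECL objective.

```python
# Minimal sketch (PyTorch) of lesion-aware contrastive pair generation and a
# generic contrastive loss; the paper's exact ECPG/UECL modules are not shown.
import torch
import torch.nn.functional as F

def make_lesion_pairs(images, lesion_ids):
    """Pair each image with another unlabeled image from the same lesion
    (falling back to the image itself if its lesion has only one image)."""
    pairs = []
    for i, lid in enumerate(lesion_ids):
        same = [j for j, l in enumerate(lesion_ids) if l == lid and j != i]
        j = same[torch.randint(len(same), (1,)).item()] if same else i
        pairs.append((images[i], images[j]))
    return pairs

def info_nce_loss(q, k, temperature=0.07):
    """InfoNCE-style contrastive loss: each query embedding should match the
    key from its own pair, with all other keys in the batch as negatives."""
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    logits = q @ k.t() / temperature            # (N, N) similarity matrix
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)
```

In this sketch each gastroscopic image forms a positive pair with another image of the same lesion rather than only with an augmented copy of itself, which reflects the intuition behind the "efficient contrastive pairs" described in the abstract; the learned encoder would then be fine-tuned on the small labeled set for the downstream classification task.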
Funding
This research was supported by the National Natural Science Foundation of China (61720106004 and 61872405) and the Key R&D Project of Sichuan Province (2020YFS0243).
Ethics declarations
Ethics approval
This article does not contain any studies with human participants performed by any of the authors.
Conflicts of interest/Competing interests
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This article is part of the Topical Collection on Image & Signal Processing
Cite this article
Du, W., Rao, N., Yong, J. et al. Improving the Classification Performance of Esophageal Disease on Small Dataset by Semi-supervised Efficient Contrastive Learning. J Med Syst 46, 4 (2022). https://doi.org/10.1007/s10916-021-01782-z