
A locally-processed light-weight deep neural network for detecting colorectal polyps in wireless capsule endoscopes

  • Special Issue Paper
  • Published in: Journal of Real-Time Image Processing

Abstract

Wireless capsule endoscopes (WCEs) are revolutionary devices for the noninvasive inspection of gastrointestinal tract diseases. However, it is tedious and error-prone for physicians to inspect the huge number of captured images, and artificial intelligence enables computer-aided diagnostic tools that tackle this challenge. Unlike previous research, which focuses on applying large deep neural network (DNN) models to images that have already been saved on a computer, we propose a light-weight DNN model that has the potential to run locally on the WCE. Thus, only images showing disease are transmitted, saving energy on data transmission. Several aspects of the design are presented in detail, including the DNN’s architecture, the loss function, the criterion for a true positive, and the data augmentation. We explore the design parameters of the DNN architecture in several experiments, which use a training dataset of 1222 images and a test dataset of 153 images. Our results indicate that the designed DNN achieves an Average Precision of AP\(_{25} = 91.7\%\) on the test dataset while its parameter storage size is only \(29.1\,\mathrm{KB}\), small enough to run locally on a WCE. In addition, the real-time performance of the designed DNN model is tested on an FPGA, which completes one image classification in less than \(6.28\,\mathrm{ms}\), well under the \(167\,\mathrm{ms}\) required for real-time operation on the WCE. We conclude that our DNN model possesses significant advantages over previous models for WCEs in terms of model size and real-time performance.
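To make the abstract’s evaluation criterion and timing budget concrete, the sketch below illustrates the convention usually denoted by AP\(_{25}\): a detection counts as a true positive when its intersection-over-union (IoU) with a ground-truth polyp region is at least 0.25. This is a minimal illustration of that convention, not the authors’ code; the box format, the helper names, and the 6 frames-per-second capture rate implied by the \(167\,\mathrm{ms}\) budget are assumptions made for this example.

```python
# Hedged sketch: the conventional IoU >= 0.25 true-positive test behind an
# AP_25-style metric. The (x1, y1, x2, y2) box format and all names are
# illustrative assumptions, not the paper's implementation.

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred_box, gt_boxes, threshold=0.25):
    """A prediction counts as a true positive if it overlaps some
    ground-truth polyp box with IoU >= threshold (0.25 for AP_25)."""
    return any(iou(pred_box, gt) >= threshold for gt in gt_boxes)

# Real-time budget: a capsule capturing ~6 frames per second leaves
# 1000 ms / 6 ≈ 167 ms per frame; the paper reports < 6.28 ms on an FPGA.
FRAME_RATE_HZ = 6  # assumed rate, consistent with the 167 ms figure
budget_ms = 1000.0 / FRAME_RATE_HZ
print(f"per-frame budget: {budget_ms:.0f} ms")  # -> 167 ms
```

Under these assumptions, a per-frame budget of roughly \(167\,\mathrm{ms}\) against a measured inference time below \(6.28\,\mathrm{ms}\) leaves substantial headroom for image capture and transmission on the capsule.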



Acknowledgements

The authors would like to thank the Louis-Hansens Fond, Denmark, for its financial support.

Funding

This article is the result of a research project funded by the Louis-Hansens Fond, Denmark.

Author information


Corresponding author

Correspondence to Yunlong Wang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wang, Y., Yoo, S., Braun, JM. et al. A locally-processed light-weight deep neural network for detecting colorectal polyps in wireless capsule endoscopes. J Real-Time Image Proc 18, 1183–1194 (2021). https://doi.org/10.1007/s11554-021-01126-7

