
Method for real-time automatic setting of ultrasonic image parameters based on deep learning

Published in: Multimedia Tools and Applications

Abstract

In this paper, we propose a method for automatically setting ultrasonic image parameter values based on deep-learning image classification. The method first classifies ultrasonic images with a convolutional neural network and then sets the gray map and Gain parameters accordingly to acquire high-quality images. For the classification step, we initially used GoogLeNet; however, because GoogLeNet has a complicated structure and a low operating speed, this paper proposes a new convolutional neural network structure for classifying the images. The results show that the customized classifier recognizes images faster without compromising performance, thereby achieving rapid, automatic setting of ultrasonic image parameters.
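The two-step pipeline described in the abstract (classify the frame, then apply the parameter preset for that class) can be sketched as follows. This is a minimal illustration only: the class labels, the gray map/Gain presets, and the lightweight network architecture below are assumptions made for the example, not the customized network or the presets reported in the paper.

import torch
import torch.nn as nn

# Hypothetical image classes; the paper's actual class labels are not listed here.
CLASSES = ["abdomen", "heart", "thyroid", "fetus"]

# Hypothetical per-class presets for the gray map and Gain parameters.
PARAM_PRESETS = {
    "abdomen": {"gray_map": 3, "gain": 55},
    "heart":   {"gray_map": 1, "gain": 60},
    "thyroid": {"gray_map": 2, "gain": 50},
    "fetus":   {"gray_map": 4, "gain": 65},
}

class LightweightUltrasoundCNN(nn.Module):
    """A small CNN classifier standing in for the paper's customized network."""
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global average pooling keeps the head small
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

def set_parameters_for_image(model: nn.Module, image: torch.Tensor) -> dict:
    """Classify one grayscale ultrasound frame and return the matching preset."""
    model.eval()
    with torch.no_grad():
        logits = model(image.unsqueeze(0))    # add a batch dimension
        label = CLASSES[int(logits.argmax(dim=1))]
    return PARAM_PRESETS[label]

if __name__ == "__main__":
    model = LightweightUltrasoundCNN()
    frame = torch.rand(1, 224, 224)           # placeholder single-channel frame
    print(set_parameters_for_image(model, frame))

In a real deployment the returned preset would be passed to the scanner's parameter interface; here the lookup table simply illustrates the class-to-parameter mapping.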




Acknowledgments

This work was supported by the GRRC program of Gyeonggi province. [GRRC-Gachon2017(B03), Development of Personalized Digital Support Technology based on Artificial Intelligence].

Author information


Corresponding author

Correspondence to Taeg Keun Whangbo.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wang, D., Tian, J. & Whangbo, T.K. Method for real-time automatic setting of ultrasonic image parameters based on deep learning. Multimed Tools Appl 78, 1067–1080 (2019). https://doi.org/10.1007/s11042-018-6365-y

