
Real-time kinematic analysis of beam pumping unit: a deep learning approach

  • Original Article
  • Neural Computing and Applications

Abstract

A kinematics analysis model for the beam pumping unit (BPU) based on deep learning is proposed in this study. It takes a real-time BPU video stream as input and outputs a sequence of kinematic parameters. The model consists of two parts. The first is a motion detection model based on a modified YOLOv4, which outputs the component classes and positions of the targeted BPU in the input video. Experimental results on the test dataset show that the average precision of the modified model is superior to that of the original YOLOv4 and other methods. The second is a BPU kinematics analysis model built on the motion detection results, which analyzes and outputs the kinematic parameters of the BPU in real time. Finally, we deployed the model on a mobile inspection robot and ran it in an oilfield. The experimental results show that the proposed approach achieves accurate, real-time kinematics analysis of the targeted BPU. Overall, this research provides a novel approach for BPU kinematics analysis, an important part of intelligent oilfield construction.
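For illustration only, the following minimal Python sketch shows one way kinematic quantities such as the walking-beam angle, angular velocity, and angular acceleration might be recovered from per-frame detection results by finite differencing. This is not the authors' implementation: the component class names ("horsehead", "beam_pivot"), the detection format, the frame rate, and the finite-difference scheme are all assumptions made to keep the sketch self-contained.

```python
import numpy as np


def beam_angle(horsehead_xy, pivot_xy):
    """Walking-beam angle (radians) from two detected component centers in
    image coordinates. Image y grows downward, so dy is flipped."""
    dx = horsehead_xy[0] - pivot_xy[0]
    dy = pivot_xy[1] - horsehead_xy[1]
    return np.arctan2(dy, dx)


def kinematics_from_detections(detections, fps):
    """Given per-frame detections (each a dict mapping an assumed component
    class name to its bounding-box center), return the beam angle, angular
    velocity, and angular acceleration sequences via finite differences."""
    angles = np.array(
        [beam_angle(d["horsehead"], d["beam_pivot"]) for d in detections]
    )
    dt = 1.0 / fps
    omega = np.gradient(angles, dt)   # angular velocity (rad/s)
    alpha = np.gradient(omega, dt)    # angular acceleration (rad/s^2)
    return angles, omega, alpha


if __name__ == "__main__":
    # Synthetic detections: a beam oscillating at ~6 strokes per minute.
    fps = 25
    t = np.arange(0.0, 10.0, 1.0 / fps)
    pivot = (320.0, 240.0)
    radius = 150.0
    theta = 0.3 * np.sin(2 * np.pi * 0.1 * t)   # true beam angle over time
    frames = [
        {"horsehead": (pivot[0] + radius * np.cos(a),
                       pivot[1] - radius * np.sin(a)),
         "beam_pivot": pivot}
        for a in theta
    ]
    ang, omega, alpha = kinematics_from_detections(frames, fps)
    print("peak angular velocity (rad/s):", np.abs(omega).max())
```

In the paper the detections come from the modified YOLOv4 model running on the live video stream; here they are synthesized purely to make the sketch runnable and to show where such a kinematics stage would plug in.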




Author information


Corresponding author

Correspondence to Zhiqing Huang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article


Cite this article

Sun, J., Huang, Z., Zhu, Y. et al. Real-time kinematic analysis of beam pumping unit: a deep learning approach. Neural Comput & Applic 34, 7157–7171 (2022). https://doi.org/10.1007/s00521-021-06783-0

