Abstract
A kinematics analysis model for the beam pumping unit (BPU) based on deep learning is proposed in this study. It takes a real-time BPU video stream as input and outputs a sequence of kinematic parameters. The model comprises two parts. The first is a motion detection model based on a modified YOLOv4, which outputs the component classes and positions of the targeted BPU in the input video. Experimental results on the test dataset show that the average precision of the modified model is superior to that of the original YOLOv4 and other methods. The second is a BPU kinematics analysis model built on the motion detection results, which analyzes and outputs the kinematic parameters of the BPU in real time. Finally, we deployed the model on a mobile inspection robot and ran it in an oilfield. The experimental results show that the proposed approach achieves accurate real-time kinematics analysis of the targeted BPU. Overall, this research provides a novel approach to BPU kinematics analysis, an important part of intelligent oilfield construction.
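The abstract describes deriving kinematic parameters from per-frame detection results. As a minimal sketch of that second stage — assuming the beam pivot location is known and the detector supplies the horsehead bounding-box center in each frame (function and variable names here are hypothetical, not from the paper) — angular position, velocity, and acceleration of the walking beam can be estimated by finite differences over the detection sequence:

```python
import math

def beam_angle(pivot, head_center):
    """Angle (radians) of the line from the beam pivot to the horsehead center."""
    dx = head_center[0] - pivot[0]
    dy = head_center[1] - pivot[1]
    return math.atan2(dy, dx)

def kinematics_from_detections(pivot, head_centers, fps):
    """Estimate angular position, velocity, and acceleration sequences.

    pivot        -- (x, y) image coordinates of the beam pivot
    head_centers -- list of (x, y) horsehead box centers, one per frame
    fps          -- video frame rate, used as the sampling interval
    """
    dt = 1.0 / fps
    # Angular position per frame.
    theta = [beam_angle(pivot, c) for c in head_centers]
    # Central finite differences for velocity and acceleration.
    omega = [(theta[i + 1] - theta[i - 1]) / (2 * dt)
             for i in range(1, len(theta) - 1)]
    alpha = [(omega[i + 1] - omega[i - 1]) / (2 * dt)
             for i in range(1, len(omega) - 1)]
    return theta, omega, alpha
```

In practice the published model presumably also handles detection noise (e.g., smoothing the trajectory) and maps beam angle to polished-rod displacement via the unit's geometry; this sketch only shows the basic differencing step.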
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Cite this article
Sun, J., Huang, Z., Zhu, Y. et al. Real-time kinematic analysis of beam pumping unit: a deep learning approach. Neural Comput & Applic 34, 7157–7171 (2022). https://doi.org/10.1007/s00521-021-06783-0