
Visual perception system design for rock breaking robot based on multi-sensor fusion

Multimedia Tools and Applications

Abstract

In recent years, mining automation has received significant attention as a critical focus area. Rock breaking robots are widely used in the mining industry, and their automation requires an accurate and fast visual perception system. At present, rock detection and the determination of rock breaking surfaces rely heavily on operator experience. To address this, this paper adopts multi-sensor fusion, specifically camera and lidar fusion, as the perception system of the rock breaking robot. The PP-YOLO series algorithm is employed for 2D detection, generating detection results tailored to the breaking requirements. The rocks detected in the 2D region are then reconstructed in 3D from the point cloud data, and the rock breaking surfaces are extracted through point cloud segmentation and statistical filtering. Experimental results show a rock detection time of 13.8 ms per frame with an mAP of 91.2%. The segmentation accuracy for rock breaking surfaces is 75.46%, with an average recall of 91.08%, and segmentation takes 73.09 ms, meeting the real-time detection and segmentation requirements within the specified rock breaking range. The study thus effectively addresses the limitations of single-sensor perception.
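To make the pipeline concrete, the sketch below illustrates the fusion-and-segmentation steps summarized above: lidar points are projected into a 2D detection box using a calibrated camera-lidar pair, filtered statistically, and the dominant plane is fitted as a candidate breaking surface. This is not the authors' implementation; Open3D stands in for the paper's point-cloud tooling, and the intrinsics `K`, extrinsics `T_cam_lidar`, and detector output `bbox` are assumed inputs.

```python
# Minimal sketch (not the authors' code): crop lidar points to a 2D detection
# box, remove statistical outliers, and extract a planar breaking surface.
# Assumed inputs: K (3x3 camera intrinsics), T_cam_lidar (4x4 lidar-to-camera
# extrinsics), bbox = (u_min, v_min, u_max, v_max) from the 2D detector.
import numpy as np
import open3d as o3d

def extract_breaking_surface(points_lidar, K, T_cam_lidar, bbox):
    # Transform lidar points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep points in front of the camera, then project with the pinhole model.
    pts_cam = pts_cam[pts_cam[:, 2] > 0]
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    # Retain only points whose projection falls inside the detection box.
    u_min, v_min, u_max, v_max = bbox
    in_box = ((uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) &
              (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max))
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(pts_cam[in_box])

    # Statistical filtering: drop points whose mean neighbor distance deviates
    # from the global mean by more than std_ratio standard deviations.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # RANSAC plane fit: the dominant plane serves as a candidate breaking
    # surface; plane is (a, b, c, d) with ax + by + cz + d = 0.
    plane, inliers = pcd.segment_plane(distance_threshold=0.02,
                                       ransac_n=3, num_iterations=1000)
    return plane, pcd.select_by_index(inliers)
```

The thresholds (`nb_neighbors`, `std_ratio`, `distance_threshold`) are illustrative defaults and would need tuning to the sensor noise and rock scale of a real rig.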





Acknowledgements

This research was supported by the National Natural Science Foundation of China (Grant No. 51875094) and the Fundamental Research Funds for the Central Universities (Grant No. 2020GFYD023).

Author information

Corresponding author

Correspondence to Yu Liu.

Ethics declarations

Competing interests

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Li, J., Liu, Y., Wang, S. et al. Visual perception system design for rock breaking robot based on multi-sensor fusion. Multimed Tools Appl 83, 24795–24814 (2024). https://doi.org/10.1007/s11042-023-16189-w

