DOI: 10.1145/3582177.3582191
IPMV conference proceedings · research-article

Penetration Point Detection for Autonomous Trench Excavation Based on Binocular Vision

Published: 31 March 2023

ABSTRACT

To autonomously detect the penetration point in the working area of trench excavation, a penetration-point detection method based on binocular cameras is proposed. First, a homogeneous coordinate transformation is established to convert the 3D point cloud of the excavation area from the camera coordinate system to the excavator's global base coordinate system. Then, a global gradient consistency function is designed to describe the geometric feature of a trench's penetration point, and the position coordinates of the penetration point are detected. Finally, a penetration-point detection test is conducted in the excavation area. Within the range of the excavation operation, the maximum position error of the detected penetration point is less than 80 mm and the average error is 46.2 mm, demonstrating that the method can detect the penetration point effectively.
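The two-stage pipeline the abstract describes can be sketched roughly as follows. Everything below is illustrative: the extrinsics, the toy terrain profile, and the simple height-gradient peak criterion are invented for the sketch, and the paper's actual global gradient consistency function is not reproduced here.

```python
import numpy as np

# Hypothetical extrinsics: rotation R and translation t expressing the
# camera frame in the excavator's global base frame (assumed values).
R = np.array([[1.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])
t = np.array([2.0, -3.0 * 0.0, 1.5])   # camera mounted 1.5 m above the base origin

# 4x4 homogeneous transform from camera frame to base frame.
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

def camera_to_base(points_cam):
    """Convert an (N, 3) point cloud from camera to base coordinates."""
    n = points_cam.shape[0]
    homo = np.hstack([points_cam, np.ones((n, 1))])   # (N, 4) homogeneous points
    return (T @ homo.T).T[:, :3]

# Toy terrain profile in the camera frame: flat ground with a 0.5 m step
# at x = 2.0, standing in for the edge of a trench.
x = np.linspace(0.0, 4.0, 41)
z = np.where(x < 2.0, 0.0, -0.5)
points_cam = np.stack([x, z, np.full_like(x, 3.0)], axis=1)
points_base = camera_to_base(points_cam)

# Crude stand-in for a gradient-based criterion: take the candidate
# penetration point where the height gradient along the profile peaks.
heights = points_base[:, 2]
grad = np.abs(np.gradient(heights, x))
penetration_idx = int(np.argmax(grad))
penetration_point = points_base[penetration_idx]
```

In this sketch the gradient peak lands at the step in the terrain, so `penetration_point` sits at the trench edge in base coordinates; the paper's method instead evaluates a purpose-built global gradient consistency function over the full 3D point cloud.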


Published in
IPMV '23: Proceedings of the 2023 5th International Conference on Image Processing and Machine Vision
January 2023, 107 pages
ISBN: 9781450397926
DOI: 10.1145/3582177
Copyright © 2023 ACM


Publisher: Association for Computing Machinery, New York, NY, United States

Qualifiers: research-article · Research · Refereed limited