Penetration Point Detection for Autonomous Trench Excavation Based on Binocular Vision

ABSTRACT
To autonomously detect the penetration point within the working area of a trench excavation, a binocular-camera-based feature detection method for the penetration point is proposed. First, a homogeneous coordinate transformation is established to convert the 3D point cloud of the excavation area from the camera coordinate system to the excavator's global base coordinate system. Then, a global gradient consistency function is designed to describe the geometric feature of the trench penetration point, and the position coordinates of the penetration point are detected. Finally, penetration point detection tests are conducted in the excavation area. Within the excavation operating range, the maximum position error of penetration point detection is less than 80 mm, and the average error is 46.2 mm, which demonstrates that the method can effectively detect the penetration point.
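The first step described above, mapping camera-frame point-cloud coordinates into the excavator's global base frame via a homogeneous transformation, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the actual rotation `R` and translation `t` would come from the excavator's calibrated camera mounting and kinematics, which the abstract does not specify, so the function and variable names here are hypothetical.

```python
import numpy as np

def make_homogeneous_transform(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation R
    and a 3-vector translation t (camera frame -> base frame)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def transform_point_cloud(points_cam, T_base_cam):
    """Map an (N, 3) point cloud from the camera coordinate system
    to the excavator base coordinate system."""
    n = points_cam.shape[0]
    # Append a homogeneous coordinate of 1 to each point: (N, 4).
    pts_h = np.hstack([points_cam, np.ones((n, 1))])
    # Apply the transform and drop the homogeneous coordinate.
    return (T_base_cam @ pts_h.T).T[:, :3]
```

For example, with an identity rotation and a pure translation, every point is simply shifted by `t`; in practice the transform would also encode the camera's orientation relative to the excavator base.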