
The Kinect-based robot environment perception system

Published: 28 June 2024

Abstract

Kinect-based robot environment perception systems are widely used because of their low cost, but they suffer from low perception accuracy. This paper is devoted to improving the accuracy of a Kinect-based robot environment perception system. First, we exploit the robustness of the $L_2E$ estimator to improve the registration accuracy between the depth image and the color image. Second, the system uses a residual-fusion super-resolution reconstruction method to improve the quality of the color and depth images. Extensive experiments on real-scene datasets show that our method achieves high accuracy in both quantitative and qualitative evaluation.
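The $L_2E$ estimator mentioned in the abstract minimizes the integrated squared error between a fitted parametric density and the data, which makes it far less sensitive to outliers (e.g. mismatched depth/color correspondences) than least squares. The sketch below is illustrative only, not the paper's implementation: it estimates a single location parameter by grid search over the closed-form Gaussian $L_2E$ criterion, and all names in it are hypothetical.

```python
import numpy as np

def l2e_location(x, sigma=1.0, grid_size=2001):
    """Robust location estimate via the L2E criterion for a Gaussian
    model with fixed scale sigma:
        L2E(mu) = 1/(2*sqrt(pi)*sigma) - (2/n) * sum_i N(x_i; mu, sigma^2)
    Minimizing this over mu down-weights outliers, unlike the sample mean."""
    x = np.asarray(x, dtype=float)
    grid = np.linspace(x.min(), x.max(), grid_size)   # candidate mu values
    diffs = x[:, None] - grid[None, :]                # shape (n, grid_size)
    dens = np.exp(-diffs**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    crit = 1.0 / (2 * np.sqrt(np.pi) * sigma) - 2.0 * dens.mean(axis=0)
    return grid[np.argmin(crit)]

# Inliers near 0 plus gross outliers at 8: the sample mean is dragged
# toward the outliers, while the L2E estimate stays near the inlier mode.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 0.5, 80), np.full(20, 8.0)])
print(x.mean())          # pulled toward the outliers (about 1.6)
print(l2e_location(x))   # stays near 0
```

The same robustness principle extends to the image-registration setting, where the parameter being estimated is a transformation between depth and color coordinates rather than a scalar location.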


    Published In

ICRSA '23: Proceedings of the 2023 6th International Conference on Robot Systems and Applications
September 2023, 335 pages
ISBN: 9798400708039
DOI: 10.1145/3655532
This work is licensed under a Creative Commons Attribution International 4.0 License.

    Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

1. super-resolution reconstruction
2. the indoor robot environment perception framework
3. image registration
