
Autonomous Depth Perception of Humanoid Robot Using Binocular Vision System Through Sensorimotor Interaction with Environment

  • Conference paper in Neural Information Processing (ICONIP 2015)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9490)


Abstract

In this paper, we explore how a humanoid robot equipped with two cameras can learn to improve its depth perception by itself. We propose an approach that autonomously tunes the parameters of the robot's binocular vision system and improves depth estimation through interaction with the environment. To set these parameters, the robot uses sensory invariant driven action (SIDA): distinct actions that nevertheless produce identical sensory stimuli. The robot generates these actions autonomously, without external control, and gathers from them the training data needed to tune the binocular vision system. Object size invariance (OSI) is used to test whether the current depth estimate is correct; when the estimate is reliable, the robot refines the parameters of the binocular vision system based on OSI once more. By interacting with the environment, the robot learns the relation between an object's size and its distance from the robot. Our approach shows that action plays an important role in perception. Experimental results show that the proposed approach successfully and automatically improves the humanoid robot's depth estimation.
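To make the two ingredients of the abstract concrete, below is a minimal sketch, assuming a rectified pinhole stereo model. The function names (depth_from_disparity, osi_consistent), the parameters (focal_px, baseline_m), and the 10% tolerance are illustrative assumptions, not the authors' implementation. It shows how depth follows from disparity via Z = fB/d, and how an OSI check can flag an unreliable depth estimate before it is used as a training sample.

```python
# Sketch: binocular depth estimation plus an object size invariance (OSI)
# consistency check. All names and constants here are hypothetical; the
# paper's actual parameter-tuning procedure is not reproduced.

def depth_from_disparity(disparity_px: float, focal_px: float,
                         baseline_m: float) -> float:
    """Triangulated depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px


def osi_consistent(apparent_size_px: float, depth_m: float,
                   known_size_m: float, focal_px: float,
                   tol: float = 0.1) -> bool:
    """Object size invariance: apparent size scales as f * S / Z.

    If the apparent size predicted from the current depth estimate
    disagrees with the measured one by more than `tol` (relative error),
    the estimate is treated as unreliable and is not used for tuning.
    """
    predicted_px = focal_px * known_size_m / depth_m
    return abs(predicted_px - apparent_size_px) / predicted_px < tol


# Example: a 0.2 m object viewed with a 700 px focal length, 0.06 m baseline.
z = depth_from_disparity(disparity_px=35.0, focal_px=700.0, baseline_m=0.06)
print(f"estimated depth: {z:.2f} m")                     # 1.20 m
print(osi_consistent(apparent_size_px=117.0, depth_m=z,
                     known_size_m=0.2, focal_px=700.0))  # True
```

In this reading, SIDA supplies the training pairs (different vergence actions yielding the same stimulus), while the OSI check acts as a gate that decides which depth estimates are trustworthy enough to drive further parameter tuning.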



Acknowledgements

This work was partly supported by the Industrial Strategic Technology Development Program (10044009) funded by the Ministry of Trade, Industry and Energy (MOTIE, Korea) (50%) and by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT and Future Planning (2013R1A2A2A01068687) (50%).

Author information


Corresponding author

Correspondence to Minho Lee.



Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Jin, Y., Rammohan, M., Lee, G., Lee, M. (2015). Autonomous Depth Perception of Humanoid Robot Using Binocular Vision System Through Sensorimotor Interaction with Environment. In: Arik, S., Huang, T., Lai, W., Liu, Q. (eds) Neural Information Processing. ICONIP 2015. Lecture Notes in Computer Science, vol 9490. Springer, Cham. https://doi.org/10.1007/978-3-319-26535-3_63


  • DOI: https://doi.org/10.1007/978-3-319-26535-3_63


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-26534-6

  • Online ISBN: 978-3-319-26535-3

  • eBook Packages: Computer Science, Computer Science (R0)
