Intuitive Interaction with Robots – Technical Approaches and Challenges

Chapter in: Formal Modeling and Verification of Cyber-Physical Systems

Abstract

A challenging goal in human-robot interaction research is to build robots that are intuitive interaction partners for humans. Although some research focuses on building robots that look and behave exactly like a human, even simple, toylike robots can be accepted as adequate and intuitive interaction partners. For complex interaction tasks, intelligent support, or cooperative behavior, however, more advanced, on-board solutions must be developed that still support natural interaction between human and robot. This chapter discusses research in the field of human-robot interaction that is fundamental for more complex yet still intuitive interaction. The aim is to convey the complexity of the required research and to point out the different research areas that are relevant to achieving the goal of developing robots that can be natural interaction partners for humans.




Corresponding author: Elsa Andrea Kirchner


Copyright information

© 2015 Springer Fachmedien Wiesbaden

Cite this chapter

Kirchner, E., de Gea Fernandez, J., Kampmann, P., Schröer, M., Metzen, J., Kirchner, F. (2015). Intuitive Interaction with Robots – Technical Approaches and Challenges. In: Drechsler, R., Kühne, U. (eds) Formal Modeling and Verification of Cyber-Physical Systems. Springer Vieweg, Wiesbaden. https://doi.org/10.1007/978-3-658-09994-7_8

  • DOI: https://doi.org/10.1007/978-3-658-09994-7_8

  • Publisher Name: Springer Vieweg, Wiesbaden

  • Print ISBN: 978-3-658-09993-0

  • Online ISBN: 978-3-658-09994-7
