A Semantic-Based Method for Teaching Industrial Robots New Tasks

  • Project Report
  • Published in: KI - Künstliche Intelligenz

Abstract

This paper presents the results of the Artificial Intelligence (AI) method developed during the European project “Factory-in-a-day”. Advanced AI solutions, such as the one proposed here, enable natural human–robot collaboration, an important capability for robots in industrial warehouses. This new generation of robots is expected to work in heterogeneous production lines, interacting and collaborating efficiently with human co-workers in open, unstructured, and dynamic environments. To do so, robots need to understand and recognize demonstrations from different operators. We therefore developed a flexible and modular process for programming industrial robots, based on semantic representations. This novel learning-by-demonstration method enables non-expert operators to program new tasks on industrial robots.
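
At its core, the method (detailed in [19, 20, 21]) maps low-level observations, such as hand motion and the object properties ObjectActedOn and ObjectInHand, to semantic activity labels through interpretable rules learned with a C4.5 decision tree [18]. The following Python sketch illustrates the flavor of such rules; the property names come from the paper's notes, but the rule structure and activity labels are illustrative assumptions, not the tree learned by the authors.

```python
# Illustrative sketch of semantic rule-based activity inference.
# The rules below mimic the *kind* of decision-tree rules described
# in [19-21]; they are NOT the tree learned in the paper.
from dataclasses import dataclass

@dataclass
class Observation:
    hand_moving: bool       # from hand tracking
    object_acted_on: bool   # an object lies in the hand's workspace
    object_in_hand: bool    # an object is currently grasped (note 3)

def infer_activity(obs: Observation) -> str:
    """Map one observation frame to a semantic activity label."""
    if not obs.hand_moving:
        return "hold" if obs.object_in_hand else "idle"
    if obs.object_in_hand:
        return "put_something_somewhere"
    return "reach" if obs.object_acted_on else "move"

# Hand moving toward a nearby object, nothing grasped yet -> "reach"
print(infer_activity(Observation(hand_moving=True,
                                 object_acted_on=True,
                                 object_in_hand=False)))
```

Because such rules operate on symbolic properties rather than raw trajectories, the same model can recognize demonstrations from different operators, which is what makes the method accessible to non-experts.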


Notes

  1. http://www.factory-in-a-day.eu/

  2. The operator uses a Graphical User Interface (GUI) to mark the beginning and end of a task.

  3. The information about the object is obtained either from the vision system or from the proximity sensors on the robot skin; the same holds for the property ObjectInHand (see the first code sketch at the end of these notes).

  4. The ground truth was manually labeled by a person considered an expert, since this person had received a training session on labeling activities.

  5. The stopping criterion indicates when a process should stop, e.g., after a given duration, once a target product weight is reached, or after a certain number of objects has been handled (see the second code sketch at the end of these notes).

  6. This scenario was inspired by the standard process of orange sorting, where humans use their tactile sensation to discriminate good from bad oranges.

  7. One participant was a robotics expert and the other a non-expert. We plan to extend this study to a larger group of participants.

  8. Note that the data from the robot's kinesthetic demonstrations was not used in any way to improve the semantic models \(T_{\textit{sorting}}\).

  9. The knowledge base used in this experiment contains several squeezable fruits, such as oranges, limes, and mandarins, as well as fruits that are not squeezable, such as apples and pineapples. The proposed system can therefore generalize to the different fruits defined in the ontology domain (see the third code sketch at the end of these notes).

  10. The following link, https://youtu.be/Ti393hP_Z_g [10], presents a video of the obtained results.
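
The following is a minimal sketch of how the property ObjectInHand (note 3) could be derived from either sensing modality; the function name, parameters, and the 2 cm threshold are assumptions for illustration, not the system's actual API.

```python
def object_in_hand(vision_grasp_detected: bool,
                   skin_proximity_m: float,
                   contact_threshold_m: float = 0.02) -> bool:
    """ObjectInHand holds if either the vision system reports a grasp
    or the proximity sensing on the robot skin detects a close object."""
    return vision_grasp_detected or skin_proximity_m < contact_threshold_m

# Vision misses the grasp, but the skin senses an object 5 mm away:
assert object_in_hand(vision_grasp_detected=False, skin_proximity_m=0.005)
```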
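
A stopping criterion (note 5) can likewise be expressed as a small, configurable predicate. This sketch combines the three example conditions given in the note; the field names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StoppingCriterion:
    """Stop a process by duration, accumulated weight, or object count."""
    max_duration_s: Optional[float] = None
    max_weight_kg: Optional[float] = None
    max_objects: Optional[int] = None

    def met(self, elapsed_s: float, weight_kg: float, n_objects: int) -> bool:
        return ((self.max_duration_s is not None and elapsed_s >= self.max_duration_s)
                or (self.max_weight_kg is not None and weight_kg >= self.max_weight_kg)
                or (self.max_objects is not None and n_objects >= self.max_objects))

# Stop sorting after 60 s or 20 handled objects, whichever comes first:
criterion = StoppingCriterion(max_duration_s=60.0, max_objects=20)
assert criterion.met(elapsed_s=61.0, weight_kg=0.0, n_objects=5)
```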
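
Finally, the generalization described in note 9 rests on the knowledge base knowing which fruit classes are squeezable. In this toy sketch a dictionary stands in for the ontology; the real system queries a proper knowledge base.

```python
# Toy stand-in for the fruit ontology of note 9.
KNOWLEDGE_BASE = {
    "orange":    {"squeezable": True},
    "lime":      {"squeezable": True},
    "mandarin":  {"squeezable": True},
    "apple":     {"squeezable": False},
    "pineapple": {"squeezable": False},
}

def can_apply_sorting_skill(fruit: str) -> bool:
    """A tactile sorting skill demonstrated on oranges transfers to
    any fruit the knowledge base marks as squeezable."""
    return KNOWLEDGE_BASE.get(fruit, {}).get("squeezable", False)

assert can_apply_sorting_skill("lime")        # generalizes to limes
assert not can_apply_sorting_skill("apple")   # apples are rejected
```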

References

  1. Aggarwal JK, Ryoo MS (2011) Human activity analysis: a review. ACM Comput Surv 43(3):16

  2. Aksoy EE, Abramov A, Dörr J, Ning K, Dellen B, Wörgötter F (2011) Learning the semantics of object-action relations by observation. Int J Robot Res 30(10):1229–1249

  3. Antol S, Zitnick CL, Parikh D (2014) Zero-shot learning via visual abstraction. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T (eds) Computer vision – ECCV 2014. Lecture notes in computer science, vol 8692. Springer, Cham

  4. Bates T, Ramirez-Amaro K, Inamura T, Cheng G (2017) On-line simultaneous learning and recognition of everyday activities from virtual reality performances. In: IEEE/RSJ international conference on intelligent robots and systems (IROS), pp 3510–3515. IEEE

  5. Beetz M, Tenorth M, Jain D, Bandouch J (2010) Towards automated models of activities of daily life. Technol Disab 22:27–40

  6. Billard A, Calinon S, Dillmann R, Schaal S (2008) Robot programming by demonstration. In: Siciliano B, Khatib O (eds) Springer handbook of robotics. Springer, Berlin, Heidelberg

  7. Calinon S, D’halluin F, Sauser EL, Caldwell DG, Billard AG (2010) Learning and reproduction of gestures by imitation: an approach based on hidden Markov model and Gaussian mixture regression. Robot Autom Mag 17(2):44–54

  8. Cheng G, Ramirez-Amaro K, Beetz M, Kuniyoshi Y (2019) Purposive learning: robot reasoning about the meanings of human activities. Sci Robot 4(26). https://doi.org/10.1126/scirobotics.aav1530

  9. Dean-Leon E, Pierce B, Bergner F, Mittendorfer P, Ramirez-Amaro K, Burger W, Cheng G (2017) TOMM: tactile omnidirectional mobile manipulator. In: IEEE international conference on robotics and automation (ICRA), pp 2441–2447

  10. Dean-Leon EC, Ramirez-Amaro K, Bergner F, Dianov I, Cheng G (2018) Integration of robotic technologies for rapidly deployable robots. IEEE Trans Ind Inf 14(4):1691–1700

  11. Dean-Leon EC, Ramirez-Amaro K, Bergner F, Dianov I, Lanillos P, Cheng G (2016) Robotic technologies for fast deployment of industrial robot systems. In: IECON, IEEE, pp 6900–6907

  12. Dianov I, Ramírez-Amaro K, Lanillos P, Dean-Leon E, Bergner F, Cheng G (2016) Extracting general task structures to accelerate the learning of new tasks. In: IEEE-RAS 16th international conference on humanoid robots (Humanoids), pp 802–807

  13. Dillmann R, Asfour T, Do M, Jäkel R, Kasper A, Azad P, Ude A, Schmidt-Rohr SR, Lösch M (2010) Advances in robot programming by demonstration. KI 24(4):295–303

  14. Ko WKH, Wu Y, Tee KP, Buchli J (2015) Towards industrial robot learning from demonstration. In: Lee M, Omori T, Osawa H, Park H, Young JE (eds) HAI. ACM, New York, pp 235–238

  15. Kormushev P, Calinon S, Caldwell DG (2011) Imitation learning of positional and force skills demonstrated via kinesthetic teaching and haptic input. Adv Robot 25(5):581–603

  16. Kuniyoshi Y, Inoue H (1993) Qualitative recognition of ongoing human action sequences. In: Bajcsy R (ed) IJCAI. Morgan Kaufmann, Burlington, pp 1600–1609

  17. Lei J, Ren X, Fox D (2012) Fine-grained kitchen activity recognition using RGB-D. In: The 14th international conference on ubiquitous computing (UbiComp 2012)

  18. Quinlan R (1993) C4.5: programs for machine learning. Morgan Kaufmann Publishers, San Mateo

  19. Ramirez-Amaro K, Beetz M, Cheng G (2014) Automatic segmentation and recognition of human activities from observation based on semantic reasoning. In: IEEE/RSJ international conference on intelligent robots and systems (IROS), IEEE, pp 5043–5048

  20. Ramirez-Amaro K, Beetz M, Cheng G (2015) Understanding the intention of human activities through semantic perception: observation, understanding and execution on a humanoid robot. Adv Robot 29(5):345–362

  21. Ramirez-Amaro K, Beetz M, Cheng G (2017) Transferring skills to humanoid robots by extracting semantic representations from observations of human activities. Artif Intell 247:95–118 (special issue on AI and robotics)

  22. Ramirez-Amaro K, Dean-Leon EC, Dianov I, Bergner F, Cheng G (2016) General recognition models capable of integrating multiple sensors for different domains. In: Humanoids, IEEE, pp 306–311

  23. Ramirez-Amaro K, Inamura T, Dean-Leon EC, Beetz M, Cheng G (2014) Bootstrapping humanoid robot skills by extracting semantic representations of human-like activities from virtual reality. In: Humanoids, IEEE, pp 438–443

  24. Ramirez-Amaro K, Minhas HN, Zehetleitner M, Beetz M, Cheng G (2017) Added value of gaze-exploiting semantic representation to allow robots inferring human behaviors. ACM Trans Interact Intell Syst 7(1):5:1–5:30

  25. Summers-Stay D, Teo CL, Yang Y, Fermüller C, Aloimonos Y (2012) Using a minimal action grammar for activity understanding in the real world. In: IEEE/RSJ international conference on intelligent robots and systems (IROS), IEEE, pp 4104–4111

  26. Tenorth M, Beetz M (2017) Representations for robot knowledge in the KnowRob framework. Artif Intell 247:151–169 (special issue on AI and robotics)

  27. Wörgötter F, Agostini A, Krüger N, Shylo N, Porr B (2009) Cognitive agents—a procedural perspective relying on the predictability of Object-Action-Complexes (OACs). Robot Auton Syst 57(4):420–432

Acknowledgements

We would like to thank our colleagues Katharina Stadler and Wibke Borngesser for all their support during the project Factory-in-a-day.

This work was supported by the European Community Seventh Framework Programme (FP7/2007-2013) under Grant Agreement No. 609206, and was partially supported by the German Research Foundation (DFG) as part of the Collaborative Research Center (Sonderforschungsbereich) 1320 “EASE—Everyday Activity Science and Engineering”, University of Bremen.

Author information

Correspondence to Karinne Ramirez-Amaro.

About this article

Cite this article

Ramirez-Amaro, K., Dean-Leon, E., Bergner, F. et al. A Semantic-Based Method for Teaching Industrial Robots New Tasks. Künstl Intell 33, 117–122 (2019). https://doi.org/10.1007/s13218-019-00582-5
