Abstract
Radial basis function neural networks (RBF-NNs) are well-established function approximators. This paper presents an adaptive Gaussian RBF-NN with learning-while-controlling behaviour: the weights, function centres, and widths are updated online based on a sliding mode control element. In this way, the need to fix these parameters a priori is overcome and the network can adapt to dynamically changing systems. The aim of this work is to present an extended adaptive neuro-controller for trajectory tracking of serial robots with unknown dynamics. The adaptive RBF-NN approximates the unknown manipulator dynamics function. It is combined with a conventional controller and a bio-inspired extension for controlling a robot in the presence of switching constraints and discontinuous inputs. The controller extension increases the robustness and adaptability of the system. Its learned goal-directed output results from the complementary action of an actuator, A, and a preventer, P. The trigger is an incentive, I, based on the weighted perception of the environment. The concept is validated through simulations and implementation on a KUKA LWR4 robot.
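To make the adaptation idea concrete, the following is a minimal sketch of a Gaussian RBF network whose weights, centres, and widths are all updated online. The update rule here is plain gradient descent on the squared approximation error, not the paper's sliding-mode-derived law; the class name, learning rate, and initialisation are illustrative assumptions.

```python
import numpy as np

class AdaptiveGaussianRBF:
    """Gaussian RBF network with online adaptation of weights,
    centres and widths. Uses gradient descent on 0.5*e^2 as a
    stand-in for the paper's sliding-mode-based update law."""

    def __init__(self, n_inputs, n_centres, lr=0.05, rng=None):
        rng = np.random.default_rng(rng)
        self.c = rng.uniform(-1.0, 1.0, (n_centres, n_inputs))  # centres
        self.s = np.ones(n_centres)                             # widths
        self.w = np.zeros(n_centres)                            # output weights
        self.lr = lr

    def _phi(self, x):
        # Gaussian activations: phi_i = exp(-||x - c_i||^2 / (2 s_i^2))
        d2 = np.sum((self.c - x) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.s ** 2))

    def predict(self, x):
        return float(self.w @ self._phi(x))

    def update(self, x, target):
        # One online step: adapt all three parameter groups so the
        # network output tracks `target` (the unknown dynamics term).
        s2 = self.s ** 2
        d2 = np.sum((self.c - x) ** 2, axis=1)
        phi = np.exp(-d2 / (2.0 * s2))
        e = float(self.w @ phi) - target          # approximation error
        g = e * self.w * phi                      # shared gradient factor
        self.w -= self.lr * e * phi
        self.c += self.lr * (g / s2)[:, None] * (self.c - x)
        self.s -= self.lr * g * d2 / (s2 * self.s)
        return e
```

In the controller, `update` would be called at every sampling instant, with the target supplied implicitly by the tracking-error dynamics rather than measured directly; that coupling is what the sliding mode element in the paper provides.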
Acknowledgments
This work was carried out in the framework of the European Union-supported INTERREG GR project "ROBOTIX-Academy".
Copyright information
© 2018 Springer Nature Switzerland AG
About this paper
Cite this paper
Klecker, S., Hichri, B., Plapper, P. (2018). Learning-While Controlling RBF-NN for Robot Dynamics Approximation in Neuro-Inspired Control of Switched Nonlinear Systems. In: Kůrková, V., Manolopoulos, Y., Hammer, B., Iliadis, L., Maglogiannis, I. (eds) Artificial Neural Networks and Machine Learning – ICANN 2018. ICANN 2018. Lecture Notes in Computer Science(), vol 11141. Springer, Cham. https://doi.org/10.1007/978-3-030-01424-7_70
Print ISBN: 978-3-030-01423-0
Online ISBN: 978-3-030-01424-7