Abstract
Recently, a research trend has emerged in which algorithms are learned by means of deep learning techniques. Most of these approaches are different implementations of the controller-interface abstraction: they use a neural controller as a “processor” and provide different interfaces for input, output, and memory management. Within this trend, we consider the Neural Random-Access Machine (NRAM) model of particular interest, because it is also able to solve problems that require indirect memory references. In this paper we propose a version of the Neural Random-Access Machine in which the core neural controller is trained with the Differential Evolution meta-heuristic instead of the usual backpropagation algorithm. Experimental results showing that this approach is effective and competitive are also presented.
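To make the training scheme concrete, below is a minimal sketch of a classic DE/rand/1/bin loop optimizing a flat vector of controller weights. The hyperparameters (population size, mutation factor, crossover rate) and the toy fitness function are illustrative assumptions only; the paper's actual DE variant and NRAM loss are not reproduced here.

```python
import numpy as np


def differential_evolution(fitness, dim, pop_size=50, f=0.5, cr=0.9,
                           generations=200, seed=0):
    """Classic DE/rand/1/bin minimizing `fitness` over R^dim
    (a sketch, not the paper's configuration)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))   # candidate weight vectors
    fit = np.array([fitness(ind) for ind in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct donors, all different from the target i.
            a, b, c = rng.choice(
                [j for j in range(pop_size) if j != i], size=3, replace=False)
            mutant = pop[a] + f * (pop[b] - pop[c])      # mutation
            mask = rng.random(dim) < cr                  # binomial crossover
            mask[rng.integers(dim)] = True               # force >= 1 mutant gene
            trial = np.where(mask, mutant, pop[i])
            trial_fit = fitness(trial)
            if trial_fit <= fit[i]:                      # greedy one-to-one selection
                pop[i], fit[i] = trial, trial_fit
    best = int(np.argmin(fit))
    return pop[best], fit[best]


# Toy usage: a sphere function stands in for the NRAM training loss
# evaluated over the controller's weights.
if __name__ == "__main__":
    weights, loss = differential_evolution(
        lambda w: float(np.sum(w ** 2)), dim=10)
    print(f"best loss: {loss:.6f}")
```

Note that DE needs only fitness evaluations, not gradients, which is what allows it to stand in for backpropagation when training the neural controller.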
Copyright information
© 2018 Springer Nature Switzerland AG
About this paper
Cite this paper
Baioletti, M., Belli, V., Di Bari, G., Poggioni, V. (2018). Neural Random Access Machines Optimized by Differential Evolution. In: Ghidini, C., Magnini, B., Passerini, A., Traverso, P. (eds.) AI*IA 2018 – Advances in Artificial Intelligence. Lecture Notes in Computer Science, vol. 11298. Springer, Cham. https://doi.org/10.1007/978-3-030-03840-3_23
DOI: https://doi.org/10.1007/978-3-030-03840-3_23
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-03839-7
Online ISBN: 978-3-030-03840-3