Abstract
RAM-based neural networks are designed to be efficiently implemented in hardware. The desire to retain this property influences the training algorithms used, and has led to the use of reinforcement (reward-penalty) learning. An analysis of the reinforcement algorithm applied to RAM-based nodes has shown the ease with which unlearning can occur. An amended algorithm is proposed which demonstrates improved learning performance compared to previously published reinforcement regimes.
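The abstract names reward-penalty reinforcement learning on stochastic RAM-based nodes but does not give the amended algorithm itself. As a rough illustration only, the sketch below shows a generic stochastic RAM (pRAM-like) node trained with a standard reward-penalty rule: each address stores a firing probability, reward moves that probability toward the emitted output, and penalty moves it weakly toward the complement. All names and the damping parameter `lam` are assumptions for illustration, not the authors' amended regime.

```python
import random

class StochasticRAMNode:
    """Illustrative stochastic RAM-based node (pRAM-like sketch).

    Each input pattern addresses one memory location holding a firing
    probability alpha[u]; the node outputs 1 with that probability.
    """

    def __init__(self, n_inputs, rho=0.1, lam=0.05):
        self.alpha = [0.5] * (2 ** n_inputs)  # start unbiased
        self.rho = rho  # learning rate (assumed value)
        self.lam = lam  # penalty damping; small lam limits unlearning (assumed)

    def address(self, inputs):
        # Map the binary input tuple to a memory address.
        u = 0
        for bit in inputs:
            u = (u << 1) | bit
        return u

    def fire(self, inputs):
        # Sample the stochastic output for this input pattern.
        u = self.address(inputs)
        a = 1 if random.random() < self.alpha[u] else 0
        return u, a

    def update(self, u, a, reward):
        # Reward-penalty rule: on reward, move alpha[u] toward the output
        # actually emitted; on penalty, move it (weakly) toward the
        # complement of that output.
        if reward:
            self.alpha[u] += self.rho * (a - self.alpha[u])
        else:
            self.alpha[u] += self.rho * self.lam * ((1 - a) - self.alpha[u])
```

For example, training a single two-input node on AND with reward given for a correct output drives the stored probabilities toward 1 at address `11` and toward 0 elsewhere; making `lam` small relative to the reward step is one common way to limit the unlearning that a full-strength penalty can cause.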
Ferguson, A., Bolouri, H. Improving reinforcement learning in stochastic RAM-based neural networks. Neural Process Lett 3, 11–15 (1996). https://doi.org/10.1007/BF00417784