Abstract
In this paper, two novel recurrent algorithms for selecting the maxima of a set S of M positive real numbers are introduced. In the first, the aim is to determine a threshold T such that only the maxima of S lie above it; in the second, each element of S is reduced independently of the others until it either becomes zero (if it is not a maximum) or freezes at a positive value (if it is a maximum). Convergence analyses of both schemes, as well as neural network implementations of simplified versions of the algorithms, are given. Finally, the performance of the proposed algorithms is compared with that of other related methods.
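The paper's actual recursions are not reproduced in this abstract. As a purely illustrative sketch of the second idea (non-maximal entries driven to zero while a maximal entry settles at a positive value), the following is a classical Maxnet-style winner-take-all iteration, not the proposed algorithm; the function name `maxnet_select`, the step size `eps`, and the stopping rule are assumptions made for the sketch.

```python
def maxnet_select(values, eps=None, max_iters=10_000):
    """Suppress non-maximal entries of a list of positive reals.

    Each entry is reduced by eps times the sum of the other entries and
    clipped at zero.  For eps < 1/(M-1) and a unique maximum, the
    non-maximal entries eventually reach zero, while the maximum freezes
    at a positive value (once all other entries are zero it is no longer
    reduced).
    """
    x = [float(v) for v in values]
    m = len(x)
    if eps is None:
        eps = 0.9 / (m - 1) if m > 1 else 0.0   # below the 1/(M-1) bound
    for _ in range(max_iters):
        total = sum(x)
        new_x = [max(0.0, v - eps * (total - v)) for v in x]
        if new_x == x:          # fixed point: only the winner remains positive
            break
        x = new_x
    return [i for i, v in enumerate(x) if v > 0.0]


print(maxnet_select([3.2, 1.1, 7.5, 0.4, 2.8]))  # -> [2]
```

Note that this classical iteration, in which each entry is coupled to all the others, only handles a unique maximum cleanly (exact ties decay together and need not settle), whereas the abstract states that the proposed schemes select all maxima and, in the second scheme, update each element independently of the rest.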
Cite this article
Koutroumbas, K. Recurrent Algorithms for Selecting the Maximum Input. Neural Processing Letters 20, 179–197 (2004). https://doi.org/10.1007/s11063-004-2016-6