
Recurrent Algorithms for Selecting the Maximum Input

Neural Processing Letters

Abstract

In this paper, two novel recurrent algorithms are introduced for selecting the maxima of a set S containing M positive real numbers. In the first, the aim is to determine a threshold T such that only the maxima of S lie above it; in the second, each element of S is reduced independently of the rest until it either becomes zero (if it is not a maximum) or freezes at a positive value (if it is a maximum). A convergence analysis of both schemes is given, along with neural network implementations of simplified versions of the algorithms. Finally, the performance of the proposed algorithms is compared with that of other related methods.
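
The paper's actual update rules are not reproduced on this page, so the sketch below is only a hypothetical illustration of the two ideas in the abstract, using classical stand-ins: a mean-based iterative thresholding scheme (in the spirit of the first algorithm) and the well-known MAXNET lateral-inhibition dynamics from Lippmann's Hamming network (in the spirit of the second). The function names, the mean-based update rule, and the inhibition constant eps = 1/M are illustrative assumptions, not the paper's constructions.

```python
import numpy as np

def select_max_by_threshold(s, tol=1e-12):
    """First idea (hypothetical stand-in): raise a threshold T until only
    the maxima of s lie above it. Here T is set to the mean of the
    elements currently at or above T, a classical WTA scheme; the paper's
    actual recurrence may differ."""
    s = np.asarray(s, dtype=float)
    t = s.mean()
    while True:
        new_t = s[s >= t].mean()   # mean of the surviving elements
        if new_t - t < tol:        # T stopped rising: only maxima remain
            break
        t = new_t
    return t, np.flatnonzero(s >= t - tol)

def select_max_by_decay(s, eps=None, max_iter=100_000):
    """Second idea (hypothetical stand-in): every element is repeatedly
    reduced, driving non-maxima to zero while the maxima stay positive.
    This is the classical MAXNET lateral-inhibition update, in which each
    unit is inhibited by the sum of all the others."""
    x = np.asarray(s, dtype=float).copy()
    if eps is None:
        eps = 1.0 / x.size         # eps < 1/(M-1) lets the winners survive
    for _ in range(max_iter):
        x_new = np.maximum(0.0, x - eps * (x.sum() - x))
        if np.allclose(x_new, x):  # dynamics have (numerically) frozen
            break
        x = x_new
    return np.flatnonzero(x > 0)   # indices of the surviving maxima

s = [3.2, 7.1, 7.1, 0.5]
print(select_max_by_threshold(s))  # (7.1, array([1, 2]))
print(select_max_by_decay(s))      # [1 2]
```

Note that with a unique maximum the MAXNET-style iterate truly freezes once all other units reach zero (the inhibition term vanishes); tied maxima decay together but remain positive, so both stand-ins report all maxima correctly.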




About this article

Cite this article

Koutroumbas, K. Recurrent Algorithms for Selecting the Maximum Input. Neural Processing Letters 20, 179–197 (2004). https://doi.org/10.1007/s11063-004-2016-6
