On the Flexible Dynamics Analysis for the Unified Discrete-Time RNNs

Published in Neural Processing Letters

Abstract

The uniformly pseudo-projection-anti-monotone (UPPAM) network model jointly covers almost all known recurrent neural network (RNN) models. In this paper, we develop convergence theory for UPPAM networks in discrete time. The results on convergence to an equilibrium, as well as to a cycle of period at most 2, no longer require the connection weight matrices to be symmetric, which is a basic requirement in many existing dynamics analyses of discrete-time RNN models. Moreover, these theorems impose minimal constraints, provide general criteria for determining convergence of UPPAM networks, and are easy to verify and apply. The study shows that the approach adopted in the present paper is powerful, particularly in unifying, simplifying, and extending the various existing dynamics results for discrete-time RNNs.
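The kind of discrete-time recurrent dynamics the paper analyzes can be illustrated with a minimal sketch. The following is an assumption-laden toy, not the paper's actual UPPAM model: it uses a generic projection-type update x_{k+1} = P(W x_k + b), with P chosen (for illustration only) as clipping onto a box, and numerically classifies the trajectory as converging to an equilibrium or to a cycle of period at most 2. The weight matrix is deliberately asymmetric, since the abstract emphasizes that symmetry of W is not required.

```python
# Illustrative sketch only: a generic discrete-time recurrent iteration
# x_{k+1} = P(W x_k + b), with P a box projection (clipping). The UPPAM
# model is more general; the specific update form here is an assumption.

def clip(v, lo=-1.0, hi=1.0):
    # Box projection onto [lo, hi] (a simple pseudo-projection activation)
    return max(lo, min(hi, v))

def step(W, b, x):
    # One network update: x <- P(W x + b)
    n = len(b)
    return [clip(sum(W[i][j] * x[j] for j in range(n)) + b[i]) for i in range(n)]

def dist(x, y):
    # Max-norm distance between two states
    return max(abs(a - c) for a, c in zip(x, y))

def iterate(W, b, x0, steps=500, tol=1e-10):
    # Classify the trajectory: equilibrium, period-2 cycle, or undetermined
    prev2, prev = None, list(x0)
    x = step(W, b, prev)
    for _ in range(steps):
        if dist(x, prev) < tol:
            return "equilibrium", x
        if prev2 is not None and dist(x, prev2) < tol:
            return "period-2 cycle", x
        prev2, prev, x = prev, x, step(W, b, x)
    return "undetermined", x

# Asymmetric W: symmetry of the weight matrix is NOT assumed
W = [[0.2, 0.5], [-0.3, 0.1]]
b = [0.1, -0.2]
kind, x_star = iterate(W, b, [0.0, 0.0])
```

Because this particular W is a max-norm contraction (row sums of absolute values are below 1) and clipping is nonexpansive, the iteration settles to a unique equilibrium; a weight matrix with larger gain could instead produce the period-2 behavior the paper's theorems also cover.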



Author information


Corresponding author

Correspondence to Chen Qiao.

Additional information


This research was supported by NSFC Nos. 11471006 and 11101327, the National Science and Technology Cooperation Program of China (No. 2015DFA81780), and the Fundamental Research Funds for the Central Universities (No. xjj2017126), and was partly supported by the HPC Platform, Xi'an Jiaotong University.


Cite this article

Qiao, C., Guo, B. On the Flexible Dynamics Analysis for the Unified Discrete-Time RNNs. Neural Process Lett 50, 1755–1771 (2019). https://doi.org/10.1007/s11063-018-9959-5
