
General and Improved Five-Step Discrete-Time Zeroing Neural Dynamics Solving Linear Time-Varying Matrix Equation with Unknown Transpose


Abstract

In this paper, a general five-step discrete-time zeroing neural dynamics (DTZND) model is proposed for solving the linear time-varying matrix equation with unknown transpose. Specifically, an explicit continuous-time zeroing neural dynamics (CTZND) model is first derived from the time-varying matrix equation with unknown transpose via the Kronecker product and the vectorization technique. Furthermore, a general five-step discretization formula is designed to approximate the first-order derivative at the target point, and its convergence condition is given. The general five-step DTZND model is then obtained by applying this discretization formula to the CTZND model. Theoretical analyses establish the stability and convergence of the proposed general five-step DTZND model, and numerical experiments substantiate that it is stable and convergent with the theoretically analyzed errors. In addition, improved DTZND models are provided that trade off accuracy against computational complexity, and these are likewise verified by numerical experiments.
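The Kronecker-product and vectorization step mentioned in the abstract rests on two standard identities: \(\text{vec}(AXB)=(B^{\text{T}}\otimes A)\,\text{vec}(X)\), and \(\text{vec}(X^{\text{T}})=K\,\text{vec}(X)\) with \(K\) the commutation matrix, which is how the unknown transpose is absorbed into a linear system in \(\text{vec}(X)\). The following NumPy sketch verifies both identities; the matrix sizes and entries are illustrative only and do not reproduce the paper's specific equation form.

```python
import numpy as np

def vec(X):
    """Column-stacking vectorization: stack the columns of X into one long vector."""
    return X.flatten(order="F")

def commutation_matrix(m, n):
    """Permutation matrix K with K @ vec(X) == vec(X.T) for any m-by-n matrix X."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # vec(X)[i + j*m] = X[i, j] must land at vec(X.T)[j + i*n]
            K[j + i * n, i + j * m] = 1.0
    return K

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
X = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))

# Identity 1: vec(A X B) = (B^T kron A) vec(X)
assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))

# Identity 2: vec(X^T) = K_{mn} vec(X)
assert np.allclose(vec(X.T), commutation_matrix(3, 4) @ vec(X))
```

With these identities, any linear matrix equation containing both \(X\) and \(X^{\text{T}}\) becomes a single linear system in \(\text{vec}(X)\), to which the CTZND design can be applied.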


Figures 1–3 (omitted).




Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grant No. 61772571).

Author information

Correspondence to Xiangui Kang.


Appendix

The lemma on the bilinear transformation from \(\omega \) to \(\gamma \) is given as follows.

Lemma 1

For transformation \(\gamma =(\omega +1)/(\omega -1)\) with \(\gamma \) and \(\omega \) being complex numbers, \(|\gamma |<1\) is satisfied if and only if \(\text{ Re }(\omega )<0\) where \(|\gamma |\) and \(\text{ Re }(\omega )\) indicate the absolute value of \(\gamma \) and the real part of \(\omega \), respectively.

Proof

Let \(\omega =\eta +\xi j\), where \(\eta \) and \(\xi \) are real numbers, and j is the imaginary unit. Thus, \(\gamma \) is given by

$$\begin{aligned} \gamma =\frac{\eta +1+\xi j}{\eta -1+\xi j}. \end{aligned}$$
(28)

Firstly, if \(\eta =\text{ Re }(\omega )<0\), then

$$\begin{aligned} (\eta +1)^2+\xi ^2<(\eta -1)^2+\xi ^2. \end{aligned}$$
(29)

According to (29), we can obtain

$$\begin{aligned} |\eta +1+\xi j|<|\eta -1+\xi j|. \end{aligned}$$
(30)

Therefore, \(|\gamma |<1\).

Secondly, if \(|\gamma |<1\), then (30) holds, which is equivalent to \((\eta +1)^2+\xi ^2<(\eta -1)^2+\xi ^2\), i.e., \(4\eta <0\). Hence \(\eta <0\).

Finally, it is concluded that \(|\gamma |<1\) is satisfied for transformation \(\gamma =(\omega +1)/(\omega -1)\) if and only if \(\text{ Re }(\omega )<0\). Thus, the proof is completed. \(\square \)
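Lemma 1 is the standard bilinear (Möbius) map between the open left half-plane and the open unit disc, and it can be spot-checked numerically. A minimal sketch; the sampling range, sample count, and the tolerance used to skip the imaginary axis are arbitrary choices, not values from the paper.

```python
import numpy as np

def bilinear(omega):
    """Map omega to gamma = (omega + 1) / (omega - 1); undefined at omega = 1."""
    return (omega + 1) / (omega - 1)

rng = np.random.default_rng(0)
for _ in range(10_000):
    omega = complex(rng.uniform(-5, 5), rng.uniform(-5, 5))
    if abs(omega.real) < 1e-9:
        continue  # on the imaginary axis |gamma| = 1 exactly; skip to avoid rounding
    # Lemma 1: |gamma| < 1 exactly when Re(omega) < 0
    assert (abs(bilinear(omega)) < 1) == (omega.real < 0)
```

This is the same change of variables used in control theory to carry stability tests between continuous-time (left half-plane) and discrete-time (unit disc) settings, which is how the lemma supports the convergence analysis of the DTZND model.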


Cite this article

Hu, C., Zhang, Y. & Kang, X. General and Improved Five-Step Discrete-Time Zeroing Neural Dynamics Solving Linear Time-Varying Matrix Equation with Unknown Transpose. Neural Process Lett 51, 1715–1730 (2020). https://doi.org/10.1007/s11063-019-10181-y

