
New zeroing neural dynamics models for diagonalization of symmetric matrix stream


Abstract

In this paper, the problem of diagonalizing a symmetric matrix stream (i.e., a time-varying symmetric matrix) is investigated. To fulfill the goal of diagonalization, two error functions are constructed. By making these error functions converge to zero via zeroing neural dynamics (ZND) design formulas, a continuous ZND model is established, and its effectiveness is substantiated by simulation results. Furthermore, a Zhang et al. discretization (ZeaD) formula with high precision is developed to discretize the continuous ZND model, yielding a new 5-point discrete ZND (DZND) model for the diagonalization of a matrix stream. Theoretical analyses prove the stability and convergence of the 5-point DZND model. In addition, simulation experiments are carried out, whose results substantiate not only the efficacy of the proposed 5-point DZND model but also its higher computational precision compared with the conventional Euler-type and 4-point DZND models for the diagonalization of a symmetric matrix stream.
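For context, the ZND design formula invoked above conventionally forces a matrix-valued error function \(E(t)\) to zero through the evolution rule (the linear-activation case is shown here as a minimal sketch; the paper's specific error functions and activation functions are defined in the full text)

$$ \dot{E}(t) = -\gamma E(t), \quad \gamma > 0, $$

whose solution \(E(t) = E(0)\mathrm{e}^{-\gamma t}\) decays to zero exponentially. Discrete models of the kind compared in this paper arise from replacing the time derivative with a finite-difference (ZeaD) approximation; for instance, the forward-difference choice \(\dot{E}(t_{k}) \approx (E_{k+1} - E_{k})/\tau\), with sampling gap \(\tau\), leads to the Euler-type model, while higher-order multi-point formulas lead to the 4-point and 5-point DZND models.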


References

  1. Parlett, B.N.: The Symmetric Eigenvalue Problem. Prentice Hall, New Jersey (1998)

  2. Meyer, C.D.: Matrix Analysis and Applied Linear Algebra. Society for Industrial and Applied Mathematics, Philadelphia (2000)

  3. Golub, G.H., Van Loan, C.F.: Matrix Computations. The Johns Hopkins University Press, Baltimore (2013)

  4. Jarlebring, E., Koskela, A., Mele, G.: Disguised and new quasi-Newton methods for nonlinear eigenvalue problems. Numer. Algorithms 79, 311–335 (2018)

  5. Zhao, K., Cheng, L., Li, S.: A new updating method for the damped mass-spring systems. Appl. Math. Model. 62, 119–133 (2018)

  6. Yang, P., Shang, P.: Recurrence quantity analysis based on matrix eigenvalues. Commun. Nonlinear Sci. 59, 15–29 (2018)

  7. Lee, Z., Hambach, R., Kaiser, U., Rose, H.: Significance of matrix diagonalization in modelling inelastic electron scattering. Ultramicroscopy 175, 58–66 (2017)

  8. Al-Bahrani, L.T., Patra, J.C.: A novel orthogonal PSO algorithm based on orthogonal diagonalization. Swarm Evol. Comput. 40, 1–23 (2018)

  9. Chen, K., Yi, C.: Robustness analysis of a hybrid of recursive neural dynamics for online matrix inversion. Appl. Math. Comput. 273, 969–975 (2016)

  10. Xiao, L.: A finite-time convergent Zhang neural network and its application to real-time matrix square root finding. Neural Comput. Appl. 10, 1–8 (2017)

  11. Guo, D., Nie, Z., Yan, L.: Novel discrete-time Zhang neural network for time-varying matrix inversion. IEEE Trans. Syst., Man, Cybern. Syst. 47, 2301–2310 (2017)

  12. Xiao, L., Liao, B., Li, S., Chen, K.: Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations. Neural Netw. 98, 103–113 (2018)

  13. Zhang, Y., Qi, Z., Li, J., Qiu, B., Yang, M.: Stepsize domain confirmation and optimum of ZeaD formula for future optimization. Numer. Algorithms 81, 561–573 (2019)

  14. Xiao, L., Li, S., Yang, J., Zhang, Z.: A new recurrent neural network with noise-tolerance and finite-time convergence for dynamic quadratic minimization. Neurocomputing 285, 125–132 (2018)

  15. Jin, L., Li, S., Hu, B.: RNN models for dynamic matrix inversion: a control-theoretical perspective. IEEE Trans. Ind. Inform. 14, 189–199 (2018)

  16. Petković, M.D., Stanimirović, P.S., Katsikis, V.N.: Modified discrete iterations for computing the inverse and pseudoinverse of the time-varying matrix. Neurocomputing 289, 155–165 (2018)

  17. Baumann, M., Helmke, U.: Diagonalization of time-varying symmetric matrices. In: Proceedings of International Conference on Computational Science (ICCS), Amsterdam, Netherlands, pp. 419–428 (2002)

  18. Zhang, Y., Yi, C.: Zhang Neural Networks and Neural-Dynamic Method. Nova Science Publishers, New York (2011)

  19. Zhang, Y., Xiao, L., Xiao, Z., Mao, M.: Zeroing Dynamics, Gradient Dynamics, and Newton Iterations. CRC Press, Boca Raton (2015)

  20. Qiu, B., Zhang, Y., Yang, Z.: Analysis, verification and comparison on feedback-aided Ma equivalence and Zhang equivalency of minimum-kinetic-energy type for kinematic control of redundant robot manipulators. Asian J. Control 20, 2154–2170 (2018)

  21. Li, J., Mao, M., Uhlig, F., Zhang, Y.: Z-type neural-dynamics for time-varying nonlinear optimization under a linear equality constraint with robot application. J. Comput. Appl. Math. 327, 155–166 (2018)

  22. Xiao, L., Liao, B., Li, S., Zhang, Z., Ding, L., Jin, L.: Design and analysis of FTZNN applied to the real-time solution of a nonstationary Lyapunov equation and tracking control of a wheeled mobile manipulator. IEEE Trans. Ind. Inform. 14, 98–105 (2018)

  23. Qiao, S., Wang, X., Wei, Y.: Two finite-time convergent Zhang neural network models for time-varying complex matrix Drazin inverse. Linear Algebra Appl. 542, 101–117 (2018)

  24. Jin, L., Zhang, Y.: Continuous and discrete Zhang dynamics for real-time varying nonlinear optimization. Numer. Algorithms 73, 115–140 (2016)

  25. Guo, D., Lin, X., Su, Z., Sun, S., Huang, Z.: Design and analysis of two discrete-time ZD algorithms for time-varying nonlinear minimization. Numer. Algorithms 77, 23–36 (2018)

  26. Zhang, Y., Chou, Y., Zhang, Z., Xiao, L.: Presentation, error analysis and numerical experiments on a group of 1-step-ahead numerical differentiation formulas. J. Comput. Appl. Math. 239, 406–414 (2013)

  27. Jin, L., Zhang, Y.: Discrete-time Zhang neural network for online time-varying nonlinear optimization with application to manipulator motion generation. IEEE Trans. Neural Netw. Learn. Syst. 26, 1525–1531 (2015)

  28. Mathews, J.H., Fink, K.D.: Numerical Methods Using MATLAB. Prentice Hall, New Jersey (2004)

  29. Jin, L., Zhang, Y.: Discrete-time Zhang neural network of O(τ³) pattern for time-varying matrix pseudoinversion with application to manipulator motion generation. Neurocomputing 142, 165–173 (2014)

  30. Zhang, Y., Yang, M., Li, J., He, L., Wu, S.: ZFD formula 4Ig SFD_Y applied to future minimization. Phys. Lett. A 381, 1677–1681 (2017)

  31. Shi, Y., Qiu, B., Chen, D., Li, J., Zhang, Y.: Proposing and validation of a new four-point finite-difference formula with manipulator application. IEEE Trans. Ind. Inform. 14, 1323–1333 (2018)

  32. Griffiths, D.F., Higham, D.J.: Numerical Methods for Ordinary Differential Equations: Initial Value Problems. Springer, London (2010)

  33. Süli, E., Mayers, D.F.: An Introduction to Numerical Analysis. Cambridge University Press, Cambridge (2003)


Funding

This work is supported by the National Natural Science Foundation of China (grant 61976230), the China Postdoctoral Science Foundation (grant 2018M643306), the Guangdong Basic and Applied Basic Research Foundation (grant 2019A1515012128), the Fundamental Research Funds for the Central Universities (grant 19lgpy227), and the Shenzhen Science and Technology Plan Project (grant JCYJ20170818154936083).

Author information


Correspondence to Binbin Qiu.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1

Suppose that

$$ D = \begin{bmatrix} d_{11} & 0 & \cdots & 0 \\ 0 & d_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_{nn} \end{bmatrix} \in \mathbb{R}^{n \times n} $$

and

$$ Y = \begin{bmatrix} y_{11} & y_{12} & \cdots & y_{1n} \\ y_{21} & y_{22} & \cdots & y_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ y_{n1} & y_{n2} & \cdots & y_{nn} \end{bmatrix} \in \mathbb{R}^{n \times n}. $$

Let \(d_{ij}\) and \(y_{ij}\) denote the \(ij\)th elements of \(D\) and \(Y\), respectively. We have

$$ E = DY - YD = \begin{bmatrix} 0 & y_{12}(d_{11} - d_{22}) & \cdots & y_{1n}(d_{11} - d_{nn}) \\ y_{21}(d_{22} - d_{11}) & 0 & \cdots & y_{2n}(d_{22} - d_{nn}) \\ \vdots & \vdots & \ddots & \vdots \\ y_{n1}(d_{nn} - d_{11}) & y_{n2}(d_{nn} - d_{22}) & \cdots & 0 \end{bmatrix}. $$

Then, the \(ij\)th element of matrix \(E\) equals \(y_{ij}(d_{ii} - d_{jj})\) with \(i \neq j\) and \(i, j = 1, 2, \ldots, n\). Since the diagonal elements of \(D\) are distinct, \(d_{ii} - d_{jj} \neq 0\) for \(i \neq j\); hence, for \(E\) to be the zero matrix, every off-diagonal element \(y_{ij}\) (with \(i \neq j\) and \(i, j = 1, 2, \ldots, n\)) must equal zero. Thus, matrix \(Y\) is diagonal. The proof is completed.
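A quick numerical check of this argument (a minimal sketch in Python with illustrative matrices, not data from the paper):

import numpy as np

# Illustrative diagonal matrix with distinct diagonal elements.
D = np.diag([1.0, 2.0, 3.0])
# An arbitrary test matrix Y.
Y = np.array([[4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0],
              [1.0, 2.0, 3.0]])

E = D @ Y - Y @ D

# The ij-th element of E equals y_ij * (d_ii - d_jj), as derived above.
d = np.diag(D)
assert np.allclose(E, Y * (d[:, None] - d[None, :]))

# Hence E = 0 forces every off-diagonal y_ij to be zero whenever the
# d_ii are distinct, i.e., Y must be diagonal.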

Appendix 2

Given an N-step method \(\sum_{i=0}^{N} \alpha_{i} x_{k+i} = \tau \sum_{i=0}^{N} \beta_{i} \psi_{k+i}\) with its first and second characteristic polynomials being \(P_{N}(\varsigma) = \sum_{i=0}^{N} \alpha_{i} \varsigma^{i}\) and \(p_{N}(\varsigma) = \sum_{i=0}^{N} \beta_{i} \varsigma^{i}\), we have the following definition and results [32, 33] as the basis of DZND research.

Definition of root condition:

A polynomial \(\rho(\varsigma)\) satisfies the root condition if all of its roots satisfy \(|\varsigma| \leq 1\) and any root satisfying \(|\varsigma| = 1\) is simple.

Result 1:

An N-step method is said to be zero-stable if the first characteristic polynomial satisfies the root condition.

Result 2:

An N-step method is consistent if its first and second characteristic polynomials satisfy \(P_{N}(1) = 0\) and \(P^{\prime}_{N}(1) = p_{N}(1) \neq 0\). The N-step method is consistent of order q if the truncation error for the exact solution is of order \(O(\tau^{q})\).

Result 3:

An N-step method is convergent if and only if it is zero-stable and consistent. That is, zero-stability plus consistency implies convergence; this is known as the Dahlquist equivalence theorem.

Result 4:

A zero-stable consistent N-step method converges with the order of its truncation error.
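As an illustration of the root condition and Result 1 (a minimal sketch in Python with an illustrative polynomial, not the paper's 5-point formula, whose first characteristic polynomial is given in the full text):

import numpy as np

def satisfies_root_condition(coeffs, tol=1e-6):
    # coeffs lists the characteristic polynomial's coefficients from the
    # highest-degree term down to the constant term, as numpy.roots expects.
    # tol absorbs numerical error in the computed roots.
    roots = np.roots(coeffs)
    for i, r in enumerate(roots):
        if abs(r) > 1 + tol:
            return False  # a root lies outside the closed unit disk
        if abs(abs(r) - 1) <= tol:
            # any root on the unit circle must be simple (non-repeated)
            others = np.delete(roots, i)
            if np.any(np.abs(others - r) <= tol):
                return False
    return True

# Example: the Euler-type (1-step) method has P_1(ς) = ς - 1, whose only
# root ς = 1 is simple, so the method is zero-stable by Result 1.
print(satisfies_root_condition([1.0, -1.0]))        # True
print(satisfies_root_condition([1.0, -2.0, 1.0]))   # False: double root at ς = 1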


About this article


Cite this article

Zhang, Y., Huang, H., Yang, M. et al. New zeroing neural dynamics models for diagonalization of symmetric matrix stream. Numer Algor 85, 849–866 (2020). https://doi.org/10.1007/s11075-019-00840-5

