Abstract
In this paper, by using an integral-type error function and the twice zeroing neural-dynamics (or termed, Zhang neural-dynamics, ZND) formula, a continuous-time advanced zeroing neural-dynamics (CT-AZND) model is proposed for solving the continuous time-variant nonlinear optimization (CTVNO) problem. Furthermore, a discrete-time advanced zeroing neural-dynamics (DT-AZND) model is proposed, analyzed, and investigated for the first time for solving the discrete time-variant nonlinear optimization (DTVNO) problem. Theoretical analyses show that the proposed DT-AZND model is convergent and that its steady-state residual error follows an \(O(g^3)\) pattern, with g denoting the sampling gap. In addition, the proposed DT-AZND model performs well in the presence of various kinds of noises. Specifically, it converges toward the time-variant theoretical solution of the DTVNO problem with \(O(g^3)\) residual error in the presence of an arbitrary constant noise, and it effectively suppresses linear-form time-variant noise and bounded random noise. Illustrative numerical experiments further substantiate the efficacy and advantages of the proposed DT-AZND model for solving the DTVNO problem.
References
Cabessa J, Villa A (2016) Expressive power of first-order recurrent neural networks determined by their attractor dynamics. J Comput Syst Sci 82(8):1232–1250
Cafieri S, Monies F, Mongeau M, Bes C (2016) Plunge milling time optimization via mixed-integer nonlinear programming. Comput Ind Eng 98:434–445
Chandra R (2014) Memetic cooperative coevolution of Elman recurrent neural networks. Soft Comput 18:1549–1559
Chandra R, Frean M, Zhang M (2012) Adapting modularity during learning in cooperative co-evolutionary recurrent neural networks. Soft Comput 16:1009–1020
Gabor O, Koreanschi A, Botez R (2016) A new non-linear vortex lattice method: applications to wing aerodynamic optimizations. Chin J Aeronaut 29(5):1178–1195
Gardeux V, Chelouah R, Siarry P, Glover F (2011) EM323: a line search based algorithm for solving high-dimensional continuous non-linear optimization problems. Soft Comput 15:2275–2285
Ghezavati V, Nia NS (2015) Development of an optimization model for product returns using genetic algorithms and simulated annealing. Soft Comput 19:3055–3069
Griffiths DF, Higham DJ (2010) Numerical methods for ordinary differential equations: initial value problems. Springer, London
Jin L, Zhang Y (2015) Discrete-time Zhang neural network for online time-varying nonlinear optimization with application to manipulator motion generation. IEEE Trans Neural Netw Learn Syst 26:1525–1531
Jin L, Zhang Y, Li S (2016) Integration-enhanced Zhang neural network for real-time-varying matrix inversion in the presence of various kinds of noises. IEEE Trans Neural Netw Learn Syst 27:2615–2627
Kaveh A, Zolghadr A (2013) Topology optimization of trusses considering static and dynamic constraints using the CSS. Appl Soft Comput 13:2727–2734
Li S, Chen S, Liu B, Li Y, Liang Y (2012) Decentralized kinematic control of a class of collaborative redundant manipulators via recurrent neural networks. Neurocomputing 91:1–10
Li S, Chen S, Liu B (2013) Accelerating a recurrent neural network to finite-time convergence for solving time-varying Sylvester equation by using a sign-bi-power activation function. Neural Process Lett 37(2):189–205
Liao L, Qi H, Qi L (2004) Neurodynamical optimization. J Global Optim 28(2):175–195
Liu C, Gong Z, Teo K, Feng E (2016) Multi-objective optimization of nonlinear switched time-delay systems in fed-batch process. Appl Math Model 40:10533–10548
Mao M, Li J, Jin L, Li S, Zhang Y (2016) Enhanced discrete-time Zhang neural network for time-variant matrix inversion in the presence of bias noises. Neurocomputing 207:220–230
Miao P, Shen Y, Li Y, Bao L (2016) Finite-time recurrent neural networks for solving nonlinear optimization problems and their application. Neurocomputing 177:120–129
Pardo D, Moller L, Neunert M, Winkler A, Buchli J (2016) Evaluating direct transcription and nonlinear optimization methods for robot motion planning. IEEE Robot Autom Lett 1(2):946–953
Pelap F, Dongo P, Kapim A (2016) Optimization of the characteristics of the PV cells using nonlinear electronic components. Sustainable Energy Technol Assess 16:84–92
Ravazzi C, Fosson S, Magli E (2016) Randomized algorithms for distributed nonlinear optimization under sparsity constraints. IEEE Trans Signal Process 64(6):1420–1434
Su Z, Wang H, Yao P (2016) A hybrid backtracking search optimization algorithm for nonlinear optimal control problems with complex dynamic constraints. Neurocomputing 186:182–194
Sun Y, Preindl M, Sirouspour S, Emadi A (2016) Unified wide-speed sensorless scheme using nonlinear optimization for IPMSM drives. IEEE Trans Power Electron 32(8):6308–6322
Thomas J, Mahapatra SS (2016) Improved simple optimization (SOPT) algorithm for unconstrained non-linear optimization problems. Perspectives Sci 8:159–161
Vieira PF, Vieira SM, Gomes MI, Barbosa-Póvoa AP, Sousa JMC (2015) Designing closed-loop supply chains with nonlinear dimensioning factors using ant colony optimization. Soft Comput 19:2245–2264
Wai R, Liu C, Lin Y (2011) Robust path tracking control of mobile robot via dynamic petri recurrent fuzzy neural network. Soft Comput 15:743–767
Walther A, Biegler L (2016) On an inexact trust-region SQP-filter method for constrained nonlinear optimization. Comput Optim Appl 63:613–638
Wei Q, Liu D, Xu Y (2016) Neuro-optimal tracking control for a class of discrete-time nonlinear systems via generalized value iteration adaptive dynamic programming approach. Soft Comput 20:697–706
Xu D, Li Z, Wu W (2010) Convergence of gradient method for a fully recurrent neural network. Soft Comput 14:245–250
Zhang Y, Fang Y, Liao B, Qiao T, Tan H (2015) New DTZNN model for future minimization with cube steady-state error pattern using Taylor finite-difference formula. In: Proceedings of 6th international conference on intelligent control and information processing, pp 128–133
Zhang Y, Ke Z, Xu P, Yi C (2010) Time-varying square roots finding via Zhang dynamics versus gradient dynamics and the former’s link and new explanation to Newton-Raphson iteration. Inf Process Lett 110:1103–1109
Zhang Y, Xiao L, Xiao Z, Mao M (2015) Zeroing dynamics, gradient dynamics, and Newton iterations. CRC Press, Boca Raton
Zhang Y, Yi C (2011) Zhang neural networks and neural-dynamic method. Nova Science Publishers, New York
Zhang Y, Yi C, Ma W (2008) Comparison on gradient-based neural dynamics and Zhang neural dynamics for online solution of nonlinear equations. Lect Notes Comput Sci 5370:269–279
Zhang Y, Zhang Y, Chen D, Xiao Z, Yan X (2017) Division by zero, pseudo-division by zero, Zhang dynamics method and Zhang-gradient method about control singularity conquering. Int J Syst Sci 48(1):1–12
Zhang K, Zhang X, Ni W, Zhang L, Yao J, Li L, Yan X (2016) Nonlinear constrained production optimization based on augmented Lagrangian function and stochastic gradient. J Pet Sci Eng 146:418–431
Zhang Z, Zhang Y (2013) Design and experimentation of acceleration-level drift-free scheme aided by two recurrent neural networks. IET Control Theory Appl 7:25–42
Zhong J, Tian L, Varma P, Waller L (2016) Nonlinear optimization algorithm for partially coherent phase retrieval and source recovery. IEEE Trans Comput Imaging 2(3):310–322
Acknowledgements
This work is supported by the National Natural Science Foundation of China (under grant 61473323), by the Foundation of Key Laboratory of Autonomous Systems and Networked Control, Ministry of Education, China (under grant 2013A07), and by the Science and Technology Program of Guangzhou, China (under grant 2014J4100057). Kindly note that both authors of the paper contributed equally and share first authorship.
Ethics declarations
Conflict of interest
All authors declare that they have no conflict of interest.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Communicated by A. Di Nola.
Appendices
Appendix A
In terms of the constant noise \(\mathbf{n }(t)=\mathbf c \in \mathbb R^m\), let us apply the Laplace transform to the jth subsystem of the noise-disturbed CT-AZND model (9). Then, we have
where \(\mu > 0\), from which it can be concluded that the subsystem is stable. Applying the final value theorem of the Laplace transform, we have the following equation:
Thus, it can be concluded that \({\lim _{t \rightarrow \infty }}{\left\| {\mathbf{{z}}(t)} \right\| _\mathrm{{2}}} = 0\). That is, the noise-disturbed CT-AZND model (9) converges toward the theoretical solution of the CTVNO problem (2) with steady-state residual error \({\lim _{t \rightarrow \infty }}{\left\| {\mathbf{{z}}(t)} \right\| _\mathrm{{2}}} = 0\). The proof is thus completed.
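The transformed subsystem and its limit are not reproduced above. A consistent reconstruction (an assumption on our part: the integral-type design gives the jth subsystem the form \(\dot z_j(t) = -2\mu z_j(t) - \mu ^2 \int _0^t z_j(\tau )\,\mathrm d\tau + n_j(t)\), i.e., a double pole at \(-\mu\), which matches the stated stability and final-value conclusions) can be sketched as:

```latex
% Assumed subsystem: \dot z_j = -2\mu z_j - \mu^2 \int_0^t z_j\,d\tau + n_j(t)
sZ_j(s) - z_j(0) = -2\mu Z_j(s) - \frac{\mu^2}{s}\,Z_j(s) + N_j(s)
\quad\Longrightarrow\quad
Z_j(s) = \frac{s\bigl(z_j(0) + N_j(s)\bigr)}{(s+\mu)^2}.
% With constant noise n_j(t) = c_j, i.e., N_j(s) = c_j/s, the final value theorem gives
\lim_{t\to\infty} z_j(t) = \lim_{s\to 0} sZ_j(s)
= \lim_{s\to 0} \frac{s^2 z_j(0) + s\,c_j}{(s+\mu)^2} = 0.
```

Both poles of the assumed transfer function lie at \(-\mu < 0\), which is why the constant noise is rejected completely rather than merely attenuated.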
Appendix B
In terms of the linear-form time-variant noise \(\mathbf{n }(t) = \alpha t + \beta \in \mathbb R^m \), let us apply the Laplace transform to the jth subsystem of the noise-disturbed CT-AZND model (9), and we have
where \(\mu > 0\); therefore, the subsystem is stable. Applying the final value theorem of the Laplace transform, we have the following equation:
Thus, it is concluded that \({\lim _{t \rightarrow \infty }}{\left\| {\mathbf{{z}}(t)} \right\| _\mathrm{{2}}}= \left\| \alpha \right\| _\mathrm{{2}}/\mu ^2\). Therefore, the noise-disturbed CT-AZND model (9) converges toward the theoretical solution of the CTVNO problem (2) with the upper bound of the steady-state residual error \({\lim _{t \rightarrow \infty }}{\left\| {\mathbf{{z}}(t)} \right\| _\mathrm{{2}}}\) being \(\left\| \alpha \right\| _\mathrm{{2}}/\mu ^2\). The proof is thus completed.
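The final-value computation is not displayed above. Under the assumed subsystem form \(\dot z_j(t) = -2\mu z_j(t) - \mu ^2 \int _0^t z_j(\tau )\,\mathrm d\tau + n_j(t)\) (an assumption consistent with the stated \(\left\| \alpha \right\| _\mathrm{{2}}/\mu ^2\) conclusion), it can be sketched as:

```latex
% Assumed subsystem (double pole at -\mu); linear-form noise n_j(t) = \alpha_j t + \beta_j,
% i.e., N_j(s) = \alpha_j/s^2 + \beta_j/s:
Z_j(s) = \frac{s\bigl(z_j(0) + N_j(s)\bigr)}{(s+\mu)^2}
\quad\Longrightarrow\quad
\lim_{t\to\infty} z_j(t) = \lim_{s\to 0} sZ_j(s)
= \lim_{s\to 0} \frac{s^2 z_j(0) + \alpha_j + s\,\beta_j}{(s+\mu)^2}
= \frac{\alpha_j}{\mu^2}.
```

Taking the Euclidean norm over the m subsystems then yields the stated bound \(\left\| \alpha \right\| _\mathrm{{2}}/\mu ^2\), which decreases quadratically as the design parameter \(\mu\) increases.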
Appendix C
In terms of the bounded random noise \(\mathbf{n }(t) = {\varpi (t)} \in \mathbb R^m \), the jth subsystem can be reformulated as
from which the solution of the above equation can be obtained as
Based on Theorem 1 in Zhang and Zhang (2013), we know that there exist \(\kappa >0\) and \(\gamma >0\) such that
Thus, we obtain
We further have
Finally, we have
From the above analysis, it is concluded that the steady-state residual error \({\lim _{t \rightarrow \infty }}{\left\| {\mathbf{{z}}(t)} \right\| _\mathrm{{2}}}\) of the noise-disturbed CT-AZND model (9) is bounded in the presence of the bounded random noise \(\varpi (t)\). Specifically, this steady-state residual error is bounded by
The proof is thus completed.
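As a numerical sanity check (not part of the original proof), the following Python sketch simulates one scalar subsystem under an assumed form consistent with a double pole at \(-\mu\), namely \(\dot z = -2\mu z - \mu ^2 \int _0^t z\,\mathrm d\tau + \varpi (t)\), driven by bounded random noise, and observes that the residual settles to a small bounded value:

```python
import numpy as np

# Forward-Euler simulation of an ASSUMED scalar subsystem form:
#   z'(t) = -2*mu*z(t) - mu^2 * integral_0^t z(tau) dtau + noise(t),
# whose noise-to-z transfer function s/(s+mu)^2 has a double pole at -mu.
rng = np.random.default_rng(0)
mu, dt, T = 10.0, 1e-3, 10.0
n = int(T / dt)
z, s = 1.0, 0.0                  # s accumulates the running integral of z
history = np.empty(n)
for i in range(n):
    noise = rng.uniform(-0.5, 0.5)              # bounded random noise
    dz = -2.0 * mu * z - mu**2 * s + noise      # subsystem dynamics
    z += dt * dz                                # Euler step for z
    s += dt * z                                 # Euler step for the integral
    history[i] = z
steady = np.abs(history[3 * n // 4:]).max()     # residual after transients decay
print(steady)
```

With noise magnitude 0.5 and \(\mu = 10\), the observed steady residual stays well below the initial error of 1, illustrating (but not proving) the boundedness claim; larger \(\mu\) shrinks it further.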
Appendix D
First of all, let \(x_{k+i}\) denote \(x\left( {(k + i)g} \right) \). Then, the following equation can be derived based on the Taylor expansion:
where \({c_1}\) lies between kg and \((k + 1)g\). Similarly, the following two equations can be obtained:
and
with \({c_2}\) and \({c_3}\) lying in the intervals \(((k - 1)g, kg)\) and \(((k - 2)g, kg)\), respectively. Multiplying (24) by 3, (25) by \(-1\), and (26) by \(-1/2\), and then adding these results together, the following equation is obtained:
which can be rewritten as
with \(O(g^2)\) absorbing the three terms \((1/10){g^2}{x^{(3)}}({c_1})\),
\((1/30){g^2}{x^{(3)}}({c_2})\) and \((2/15){g^2}{x^{(3)}}({c_3})\). Computationally, the 4-point 1-step-ahead finite difference formula is proposed as
which has a truncation error of \(O( {{g^2}} )\).
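The displayed difference formula is not reproduced above; from the stated weights (3, \(-1\), \(-1/2\) on the three Taylor expansions) and the absorbed error terms, it can be reconstructed as \(\dot x_k \approx (6x_{k+1} - 3x_k - 2x_{k-1} - x_{k-2})/(10g)\), which the DT model rearranges to predict \(x_{k+1}\). The following Python sketch (an assumed reconstruction, with hypothetical helper names) checks the \(O(g^2)\) truncation error numerically:

```python
import numpy as np

def derivative_4point(x, k, g):
    """Assumed 4-point 1-step-ahead estimate of x'(kg) from samples x."""
    return (6.0 * x[k + 1] - 3.0 * x[k] - 2.0 * x[k - 1] - x[k - 2]) / (10.0 * g)

def error_at(g, t0=1.0):
    """Absolute error of the formula for x(t) = sin(t) at t = t0."""
    t = np.arange(0.0, 2.0, g)
    x = np.sin(t)
    k = int(round(t0 / g))
    return abs(derivative_4point(x, k, g) - np.cos(t0))

e1, e2 = error_at(0.01), error_at(0.005)
print(e1, e1 / e2)   # halving g should cut the error roughly 4-fold
```

Halving the sampling gap g reduces the error by a factor close to 4, consistent with the \(O(g^2)\) truncation error stated above (and hence the \(O(g^3)\) steady-state residual pattern of the resulting DT-AZND model).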
Cite this article
Shi, Y., Zhang, Y. Discrete time-variant nonlinear optimization and system solving via integral-type error function and twice ZND formula with noises suppressed. Soft Comput 22, 7129–7141 (2018). https://doi.org/10.1007/s00500-018-3020-5