
Discrete time-variant nonlinear optimization and system solving via integral-type error function and twice ZND formula with noises suppressed


Abstract

In this paper, by using an integral-type error function and the twice zeroing neural-dynamics (or termed, Zhang neural-dynamics, ZND) formula, a continuous-time advanced zeroing neural-dynamics (CT-AZND) model is proposed for solving the continuous time-variant nonlinear optimization problem. Furthermore, a discrete-time advanced zeroing neural-dynamics (DT-AZND) model is proposed, analyzed, and investigated for the first time for solving the discrete time-variant nonlinear optimization (DTVNO) problem. Theoretical analyses show that the proposed DT-AZND model is convergent and that its steady-state residual error follows an \(O(g^3)\) pattern, with g denoting the sampling gap. In addition, the proposed DT-AZND model performs advantageously in the presence of various kinds of noises. Specifically, it converges toward the time-variant theoretical solution of the DTVNO problem with \(O(g^3)\) residual error in the presence of an arbitrary constant noise, and it effectively suppresses linear-form time-variant noise and bounded random noise. Illustrative numerical experiments further substantiate the efficacy and advantages of the proposed DT-AZND model for solving the DTVNO problem.


References

  • Cabessa J, Villa A (2016) Expressive power of first-order recurrent neural networks determined by their attractor dynamics. J Comput Syst Sci 82(8):1232–1250

  • Cafieri S, Monies F, Mongeau M, Bes C (2016) Plunge milling time optimization via mixed-integer nonlinear programming. Comput Ind Eng 98:434–445

  • Chandra R (2014) Memetic cooperative coevolution of Elman recurrent neural networks. Soft Comput 18:1549–1559

  • Chandra R, Frean M, Zhang M (2012) Adapting modularity during learning in cooperative co-evolutionary recurrent neural networks. Soft Comput 16:1009–1020

  • Gabor O, Koreanschi A, Botez R (2016) A new non-linear vortex lattice method: applications to wing aerodynamic optimizations. Chin J Aeronaut 29(5):1178–1195

  • Gardeux V, Chelouah R, Siarry P, Glover F (2011) EM323: a line search based algorithm for solving high-dimensional continuous non-linear optimization problems. Soft Comput 15:2275–2285

  • Ghezavati V, Nia NS (2015) Development of an optimization model for product returns using genetic algorithms and simulated annealing. Soft Comput 19:3055–3069

  • Griffiths DF, Higham DJ (2010) Numerical methods for ordinary differential equations: initial value problems. Springer, London

  • Jin L, Zhang Y (2015) Discrete-time Zhang neural network for online time-varying nonlinear optimization with application to manipulator motion generation. IEEE Trans Neural Networks Learn Sys 26:1525–1531

  • Jin L, Zhang Y, Li S (2016) Integration-enhanced Zhang neural network for real-time-varying matrix inversion in the presence of various kinds of noises. IEEE Trans Neural Networks Learn Sys 27:2615–2627

  • Kaveh A, Zolghadr A (2013) Topology optimization of trusses considering static and dynamic constraints using the CSS. Appl Soft Comput 13:2727–2734

  • Li S, Chen S, Liu B, Li Y, Liang Y (2012) Decentralized kinematic control of a class of collaborative redundant manipulators via recurrent neural networks. Neurocomputing 91:1–10

  • Li S, Chen S, Liu B (2013) Accelerating a recurrent neural network to finite-time convergence for solving time-varying Sylvester equation by using a sign-bi-power activation function. Neural Process Lett 37(2):189–205

  • Liao L, Qi H, Qi L (2004) Neurodynamical optimization. J Global Optim 28(2):175–195

  • Liu C, Gong Z, Teo K, Feng E (2016) Multi-objective optimization of nonlinear switched time-delay systems in fed-batch process. Appl Math Model 40:10533–10548

  • Mao M, Li J, Jin L, Li S, Zhang Y (2016) Enhanced discrete-time Zhang neural network for time-variant matrix inversion in the presence of bias noises. Neurocomputing 207:220–230

  • Miao P, Shen Y, Li Y, Bao L (2016) Finite-time recurrent neural networks for solving nonlinear optimization problems and their application. Neurocomputing 177:120–129

  • Pardo D, Moller L, Neunert M, Winkler A, Buchli J (2016) Evaluating direct transcription and nonlinear optimization methods for robot motion planning. IEEE Robot Autom 1(2):946–953

  • Pelap F, Dongo P, Kapim A (2016) Optimization of the characteristics of the PV cells using nonlinear electronic components. Sustainable Energy Technol Assess 16:84–92

  • Ravazzi C, Fosson S, Magli E (2016) Randomized algorithms for distributed nonlinear optimization under sparsity constraints. IEEE Trans Signal Process 64(6):1420–1434

  • Su Z, Wang H, Yao P (2016) A hybrid backtracking search optimization algorithm for nonlinear optimal control problems with complex dynamic constraints. Neurocomputing 186:182–194

  • Sun Y, Preindl M, Sirouspour S, Emadi A (2016) Unified wide-speed sensorless scheme using nonlinear optimization for IPMSM drives. IEEE Trans Power Electron 32(8):6308–6322

  • Thomas J, Mahapatra SS (2016) Improved simple optimization (SOPT) algorithm for unconstrained non-linear optimization problems. Perspectives Sci 8:159–161

  • Vieira PF, Vieira SM, Gomes MI, Barbosa-Póvoa AP, Sousa JMC (2015) Designing closed-loop supply chains with nonlinear dimensioning factors using ant colony optimization. Soft Comput 19:2245–2264

  • Wai R, Liu C, Lin Y (2011) Robust path tracking control of mobile robot via dynamic petri recurrent fuzzy neural network. Soft Comput 15:743–767

  • Walther A, Biegler L (2016) On an inexact trust-region SQP-filter method for constrained nonlinear optimization. Comput Optim Appl 63:613–638

  • Wei Q, Liu D, Xu Y (2016) Neuro-optimal tracking control for a class of discrete-time nonlinear systems via generalized value iteration adaptive dynamic programming approach. Soft Comput 20:697–706

  • Xu D, Li Z, Wu W (2010) Convergence of gradient method for a fully recurrent neural network. Soft Comput 14:245–250

  • Zhang K, Zhang X, Ni W, Zhang L, Yao J, Li L, Yan X (2016) Nonlinear constrained production optimization based on augmented Lagrangian function and stochastic gradient. J Pet Sci Eng 146:418–431

  • Zhang Y, Fang Y, Liao B, Qiao T, Tan H (2015) New DTZNN model for future minimization with cube steady-state error pattern using Taylor finite-difference formula. In: Proceedings of 6th international conference on intelligent control and information processing, pp 128–133

  • Zhang Y, Ke Z, Xu P, Yi C (2010) Time-varying square roots finding via Zhang dynamics versus gradient dynamics and the former’s link and new explanation to Newton-Raphson iteration. Inf Process Lett 110:1103–1109

  • Zhang Y, Xiao L, Xiao Z, Mao M (2015) Zeroing dynamics, gradient dynamics, and Newton iterations. CRC Press, Boca Raton

  • Zhang Y, Yi C (2011) Zhang neural networks and neural-dynamic method. Nova Science Publishers, New York

  • Zhang Y, Yi C, Ma W (2008) Comparison on gradient-based neural dynamics and Zhang neural dynamics for online solution of nonlinear equations. Lect Notes Comput Sci 5370:269–279

  • Zhang Y, Zhang Y, Chen D, Xiao Z, Yan X (2017) Division by zero, pseudo-division by zero, Zhang dynamics method and Zhang-gradient method about control singularity conquering. Int J Syst Sci 48(1):1–12

  • Zhang Z, Zhang Y (2013) Design and experimentation of acceleration-level drift-free scheme aided by two recurrent neural networks. IET Control Theory Appl 7:25–42

  • Zhong J, Tian L, Varma P, Waller L (2016) Nonlinear optimization algorithm for partially coherent phase retrieval and source recovery. IEEE Trans Comput Imaging 2(3):310–322


Acknowledgements

This work is supported by the National Natural Science Foundation of China (grant 61473323), by the Foundation of Key Laboratory of Autonomous Systems and Networked Control, Ministry of Education, China (grant 2013A07), and by the Science and Technology Program of Guangzhou, China (grant 2014J4100057). Kindly note that both authors of the paper are jointly of the first authorship.

Author information


Corresponding author

Correspondence to Yunong Zhang.

Ethics declarations

Conflict of interest

All authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Communicated by A. Di Nola.

Appendices

Appendix A

In terms of the constant noise \(\mathbf{n }(t)=\mathbf c \in \mathbb R^m\), let us apply the Laplace transform to the jth subsystem of the noise-disturbed CT-AZND model (9). Then, we have

$$\begin{aligned} {z_j}(s) = \frac{{s({z_j}(0) + {c}_j/s)}}{{{s^2} + 2s\mu + \mu ^2 }}, \end{aligned}$$

where \(\mu > 0\), from which it can be concluded that the subsystem is stable. Using the final value theorem of the Laplace transform, we have the following equation:

$$\begin{aligned} \mathop {\lim }\limits _{t \rightarrow \infty } {z_j}(t) = \mathop {\mathrm{{lim}}}\limits _{s \rightarrow 0} s{z_j}(s) = \mathop {\mathrm{{lim}}}\limits _{s \rightarrow 0} \frac{{{s^2}({z_j}(0) + {c_j}/s)}}{{{s^2} + 2s\mu + \mu ^2}} = 0. \end{aligned}$$

Thus, it can be concluded that \({\lim _{t \rightarrow \infty }}{\left\| {\mathbf{{z}}(t)} \right\| _\mathrm{{2}}} = 0\). That is, the noise-disturbed CT-AZND model (9) converges toward the theoretical solution of the CTVNO problem (2) with steady-state residual error \({\lim _{t \rightarrow \infty }}{\left\| {\mathbf{{z}}(t)} \right\| _\mathrm{{2}}} = 0\). The proof is thus completed.
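As a complementary numerical illustration (not part of the original proof), the above final-value conclusion can be checked by simulating the jth subsystem \({\dot{z}_j}(t) = - 2\mu {z_j}(t) - \mu ^2 \int _0^t {z_j}(\sigma ) \mathrm{{d}}\sigma + c_j\) directly. In the Python sketch below, the forward-Euler discretization and all numerical settings (\(\mu\), \(c_j\), step size, horizon) are illustrative assumptions, not values from the paper.

# Forward-Euler check of Appendix A (all values assumed for illustration):
# dz/dt = -2*mu*z - mu^2 * integral_0^t z dsigma + c, with constant noise c.
mu, c = 2.0, 0.5          # design parameter and constant noise component (assumed)
dt, T = 1e-4, 10.0        # integration step and horizon (assumed)
z, w = 1.0, 0.0           # z(0) = 1; w(t) accumulates the integral of z
for _ in range(int(T / dt)):
    dz = -2.0 * mu * z - mu**2 * w + c
    w += z * dt           # update the integral term
    z += dz * dt
print(f"z(T) = {z:.2e}")  # essentially zero, matching the final value theorem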

Appendix B

In terms of the linear-form time-variant noise \(\mathbf{n }(t) = \alpha t + \beta \in \mathbb R^m \), let us apply the Laplace transform to the jth subsystem of the noise-disturbed CT-AZND model (9), and we have

$$\begin{aligned} {z_j}(s) = \frac{{s{z_j}(0) + {\alpha _j}/s + {\beta _j}}}{{{s^2} + 2s{\mu } + \mu ^2}}, \end{aligned}$$

where \(\mu > 0\); hence, the subsystem is stable. Using the final value theorem of the Laplace transform, we have the following equation:

$$\begin{aligned} \mathop {\lim }\limits _{t \rightarrow \infty } {z_j}(t) = \mathop {\mathrm{{lim}}}\limits _{s \rightarrow 0} s{z_j}(s) = \mathop {\mathrm{{lim}}}\limits _{s \rightarrow 0} \frac{{{s^2}{z_j}(0) + {\alpha _j} + s{\beta _j}}}{{{s^2} + 2s{\mu } + \mu ^2}} = \frac{{{\alpha _j}}}{{\mu ^2}}. \end{aligned}$$

Thus, it is concluded that \({\lim _{t \rightarrow \infty }}{\left\| {\mathbf{{z}}(t)} \right\| _\mathrm{{2}}}= \left\| \alpha \right\| _\mathrm{{2}}/\mu ^2\). That is, the noise-disturbed CT-AZND model (9) converges toward the theoretical solution of the CTVNO problem (2) with the steady-state residual error \({\lim _{t \rightarrow \infty }}{\left\| {\mathbf{{z}}(t)} \right\| _\mathrm{{2}}}\) bounded by \(\left\| \alpha \right\| _\mathrm{{2}}/\mu ^2\). The proof is thus completed.
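The \(\alpha _j/\mu ^2\) steady-state value can likewise be observed numerically. The following Python sketch (an illustration under assumed values of \(\mu\), \(\alpha _j\), \(\beta _j\), step size, and horizon, not settings from the paper) drives the same subsystem with the linear noise \(n_j(t) = \alpha _j t + \beta _j\).

# Forward-Euler check of Appendix B (all values assumed for illustration):
# with linear noise n_j(t) = alpha*t + beta, z_j should settle at alpha/mu^2.
mu, alpha, beta = 2.0, 0.8, 0.3   # design parameter and noise coefficients (assumed)
dt, T = 1e-4, 10.0                # integration step and horizon (assumed)
z, w, t = 0.0, 0.0, 0.0           # w(t) accumulates the integral of z
for _ in range(int(T / dt)):
    dz = -2.0 * mu * z - mu**2 * w + (alpha * t + beta)
    w += z * dt
    z += dz * dt
    t += dt
print(f"z(T) = {z:.4f}  vs  alpha/mu^2 = {alpha / mu**2:.4f}")  # both about 0.2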

Appendix C

In terms of the bounded random noise \(\mathbf{n }(t) = {\varpi (t)} \in \mathbb R^m \), the jth subsystem can be reformulated as

$$\begin{aligned} {\dot{z}_j}(t) = - 2\mu {z_j}(t) - \mu ^2 \int _0^t {{z_j}(\sigma )} \mathrm{{d}}\sigma + {\varpi _j}(t). \nonumber \end{aligned}$$

The solution of the above equation is obtained as

$$\begin{aligned} {z_j}(t)= & {} -{z_j}(0)t{\mu }\exp ({-\mu }t) + {z_j}(0)\exp ({-\mu }t)\\ \nonumber&+ \int _0^t {\left( {{-\mu }(t - \sigma )\exp ({-\mu }(t - \sigma ))} \right) \varpi _j (\sigma )} \mathrm{{d}}\sigma \\ \nonumber&+ \int _0^t {\exp ({-\mu }(t - \sigma ))\varpi _j (\sigma )} \mathrm{{d}}\sigma . \end{aligned}$$

Based on Theorem 1 in Zhang and Zhang (2013), we know that there exist \(\kappa >0\) and \(\gamma >0\) such that

$$\begin{aligned} \left| {{\mu }} \right| t\exp ({-\mu }t) \le \kappa \exp ( - \gamma t). \end{aligned}$$

Thus, we obtain

$$\begin{aligned} \left| {{z_j}(t)} \right|\le & {} \left| {-{z_j}(0)t{\mu }\exp ({-\mu }t) + {z_j}(0)\exp ({-\mu }t)} \right| \\ \nonumber&+ \int _0^t {\left| {\kappa \exp (-\gamma (t - \sigma ))} \right| \left| \varpi _j (\sigma )\right| } \mathrm{{d}}\sigma \\ \nonumber&+ \int _0^t {\left| {\exp ({-\mu }(t - \sigma ))} \right| \left| {\varpi _j (\sigma )} \right| } \mathrm{{d}}\sigma . \end{aligned}$$

We further have

$$\begin{aligned} \left| {{z_j}(t)} \right|\le & {} \left| {-{z_j}(0)t{\mu }\exp ({-\mu }t) + {z_j}(0)\exp ({-\mu }t)} \right| \\ \nonumber&+ \left( {\frac{\kappa }{\gamma } + \frac{1}{{{\mu }}}} \right) \mathop {\max }\limits _{0 \le \sigma \le t} \left| {{\varpi _j}(\sigma )} \right| . \end{aligned}$$

Finally, we have

$$\begin{aligned} \mathop {\lim }\limits _{t \rightarrow \infty } {\left\| {{\mathbf {z}}(t)} \right\| _{\text {2}}} \le \left( {\frac{\kappa }{\gamma } + \frac{1}{\mu }} \right) \sqrt{m} \mathop {\max }\limits _{0 \le \sigma \le t } \mathop {\max }\limits _{1 \le j \le m} \left| {{\varpi _j}(\sigma )} \right| . \end{aligned}$$

From the above analysis, it is concluded that the steady-state residual error \({\lim _{t \rightarrow \infty }}{\left\| {\mathbf{{z}}(t)} \right\| _\mathrm{{2}}}\) of the noise-disturbed CT-AZND model (9) is bounded in the presence of the bounded random noise \(\varpi (t)\); specifically, it is bounded by

$$\begin{aligned} \left( {\frac{\kappa }{\gamma } + \frac{1}{\mu }} \right) \sqrt{m} \mathop {\max }\limits _{0 \le \sigma \le t } \mathop {\max }\limits _{1 \le j \le m} \left| {{\varpi _j}(\sigma )} \right| . \end{aligned}$$

The proof is thus completed.
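The boundedness claim can also be illustrated numerically. In the Python sketch below (an illustration, not from the paper), the noise is modeled as piecewise-constant over each integration step, drawn uniformly from \([-\delta , \delta ]\); the values of \(\mu\), \(\delta\), step size, and horizon are assumptions, and the constants \(\kappa\) and \(\gamma\) are existential, so only boundedness relative to \(\delta\) is demonstrated, not the tightness of the bound.

import numpy as np

# Forward-Euler check of Appendix C (all values assumed for illustration):
# bounded random noise, piecewise-constant per step, uniform on [-delta, delta].
rng = np.random.default_rng(0)
mu, delta = 2.0, 0.5               # design parameter and noise bound (assumed)
dt, T = 1e-3, 20.0                 # integration step and horizon (assumed)
z, w = 1.0, 0.0                    # z(0) = 1; w(t) accumulates the integral of z
tail = []                          # residuals recorded over the second half
for k in range(int(T / dt)):
    noise = rng.uniform(-delta, delta)
    dz = -2.0 * mu * z - mu**2 * w + noise
    w += z * dt
    z += dz * dt
    if k * dt > T / 2:
        tail.append(abs(z))
print(f"max steady-state |z| = {max(tail):.3f}, noise bound delta = {delta}")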

Appendix D

First of all, let \(x_{k+i}\) denote \(x\left( {(k + i)g} \right) \), and the following equation can be derived based on the Taylor expansion:

$$\begin{aligned} x_{k+1}= & {} x\left( {(k + 1)g} \right) \\ \nonumber= & {} x(kg) + g\dot{x}(kg) + \frac{{{g^2}}}{{2!}}\ddot{x}(kg) + \frac{{{g^3}}}{{3!}}{x^{(3)}}({c_1}), \end{aligned}$$
(24)

where \({c_1}\) lies between kg and \((k + 1)g\). Similarly, the following two equations can be obtained:

$$\begin{aligned} x_{k-1}= & {} x\left( {(k - 1)g} \right) \\ \nonumber= & {} x(kg) - g\dot{x}(kg) + \frac{{{g^2}}}{{2!}}\ddot{x}(kg) - \frac{{{g^3}}}{{3!}}{x^{(3)}}({c_2}) \end{aligned}$$
(25)

and

$$\begin{aligned} x_{k-2}= & {} x\left( {(k - 2)g} \right) = x(kg) \\ \nonumber&-\, 2g\dot{x}(kg) + \frac{{{{(2g)}^2}}}{{2!}}\ddot{x}(kg) - \frac{{{{(2g)}^3}}}{{3!}}{x^{(3)}}({c_3}), \end{aligned}$$
(26)

with \({c_2}\) and \({c_3}\) lying in the intervals \(((k - 1)g, kg)\) and \(((k - 2)g, kg)\), respectively. Multiplying (24) by 3, (25) by \(-1\), and (26) by \(-1/2\), and then adding the three results together, the following equation is obtained:

$$\begin{aligned} {{\dot{x}}_k}= & {} \frac{3}{{5g}}{x_{k + 1}} - \frac{3}{{10g}}{x_k} - \frac{1}{{5g}}{x_{k - 1}} - \frac{1}{{10g}}{x_{k - 2}} \\ \nonumber&-\, \frac{1}{{10}}{g^2}{x^{(3)}}({c_1}) - \frac{1}{{30}}{g^2}{x^{(3)}}({c_2}) - \frac{2}{{15}}{g^2}{x^{(3)}}({c_3}), \end{aligned}$$

which can be rewritten as

$$\begin{aligned} {{\dot{x}}_k} = \frac{3}{{5g}}{x_{k + 1}} - \frac{3}{{10g}}{x_k} - \frac{1}{{5g}}{x_{k - 1}} - \frac{1}{{10g}}{x_{k - 2}} + O({g^2}) \nonumber \end{aligned}$$

with \(O(g^2)\) absorbing the three terms \((1/10){g^2}{x^{(3)}}({c_1})\), \((1/30){g^2}{x^{(3)}}({c_2})\), and \((2/15){g^2}{x^{(3)}}({c_3})\). Computationally, the 4-point 1-step-ahead finite difference formula is proposed as

$$\begin{aligned} {{\dot{x}}_k} \approx \frac{3}{{5g}}{x_{k + 1}} - \frac{3}{{10g}}{x_k} - \frac{1}{{5g}}{x_{k - 1}} - \frac{1}{{10g}}{x_{k - 2}}, \end{aligned}$$

which has a truncation error of \(O( {{g^2}} )\).
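To make the \(O(g^2)\) truncation error concrete, the following Python sketch applies the 4-point 1-step-ahead formula to a smooth test function and halves the sampling gap g repeatedly; the error should drop by roughly a factor of 4 at each halving. The test function \(x(t) = \sin t\), the evaluation instant, and the g values are assumptions chosen for demonstration, not material from the paper.

import numpy as np

# Check of the 4-point 1-step-ahead formula on an assumed smooth test function:
# an O(g^2) scheme loses about a factor of 4 in error each time g is halved.
x, dx = np.sin, np.cos            # test function and its exact derivative
t = 1.0                           # arbitrary evaluation instant (assumed)
for g in (0.1, 0.05, 0.025):
    approx = (3 * x(t + g) / (5 * g) - 3 * x(t) / (10 * g)
              - x(t - g) / (5 * g) - x(t - 2 * g) / (10 * g))
    print(f"g = {g:5.3f}, |error| = {abs(approx - dx(t)):.2e}")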


Cite this article

Shi, Y., Zhang, Y. Discrete time-variant nonlinear optimization and system solving via integral-type error function and twice ZND formula with noises suppressed. Soft Comput 22, 7129–7141 (2018). https://doi.org/10.1007/s00500-018-3020-5
