
Projection neural networks with finite-time and fixed-time convergence for sparse signal reconstruction

  • Original Article
  • Published in Neural Computing and Applications

Abstract

This paper considers the \(L_1\)-minimization problem for sparse signal and image reconstruction by using projection neural networks (PNNs). Firstly, a new finite-time converging projection neural network (FtPNN) is presented. Building upon FtPNN, a new fixed-time converging PNN (FxtPNN) is designed. Under the condition that the projection matrix satisfies the Restricted Isometry Property (RIP), the stability in the sense of Lyapunov and the finite-time convergence property of the proposed FtPNN are proved; then, it is proven that the proposed FxtPNN is stable and converges to the optimum solution regardless of the initial values in fixed time. Finally, simulation examples with signal and image reconstruction are carried out to show the effectiveness of our proposed two neural networks, namely FtPNN and FxtPNN.
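
The setting can be made concrete with a small numerical sketch. The dimensions, sparsity level, and Gaussian sensing matrix below are illustrative assumptions (Gaussian matrices satisfy the RIP with high probability), not values taken from the paper's experiments:

```python
import numpy as np

# Illustrative setup for the L1-minimization problem
#     min ||x||_1   subject to   Ax = y,
# which the proposed FtPNN and FxtPNN are designed to solve.
# All dimensions and the Gaussian sensing matrix are assumptions
# chosen for illustration only.
rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                     # signal length, measurements, sparsity

x_true = np.zeros(n)                     # k-sparse ground-truth signal
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # normalized Gaussian sensing matrix
y = A @ x_true                                 # compressed measurements

# The minimum-L2-norm solution also satisfies Ax = y but is dense,
# which is why the sparsity-promoting L1 objective is minimized instead.
x_l2 = np.linalg.pinv(A) @ y
print(np.count_nonzero(x_true), np.count_nonzero(np.abs(x_l2) > 1e-6))
```
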


[Figures 1–10 appear here in the published article.]

Data availability

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

References

  1. Donoho DL (2006) Compressed sensing. IEEE Trans Inform Theory 52(4):1289–1306

  2. Candes EJ, Wakin MB (2008) An introduction to compressive sampling. IEEE Signal Process Mag 25(2):21–30

  3. Gan L (2007) Block compressed sensing of natural images. In: 15th International conference on digital signal processing, pp 403–406

  4. Zhao Y, Liao X, He X, Tang R, Deng W (2021) Smoothing inertial neurodynamic approach for sparse signal reconstruction via \(l_p\)-norm minimization. Neural Netw 140:100–112

  5. Aharon M, Elad M, Bruckstein A (2006) K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans Signal Process 54(11):4311–4322

  6. Zhao Y, Liao X, He X (2022) Novel projection neurodynamic approaches for constrained convex optimization. Neural Netw 150:336–349

  7. Xu J, He X, Han X, Wen H (2022) A two-layer distributed algorithm using neurodynamic system for solving \(l_1\)-minimization. IEEE Trans Circuits Syst II Express Briefs. https://doi.org/10.1109/TCSII.2022.3159814

  8. Duarte MF, Davenport MA, Takhar D, Laska JN, Sun T, Kelly KF, Baraniuk RG (2008) Single-pixel imaging via compressive sampling. IEEE Signal Process Mag 25(2):83–91

  9. Adcock B, Gelb A, Song G, Sui Y (2019) Joint sparse recovery based on variances. SIAM J Sci Comput 41(1):246–268

  10. Xie J, Liao A, Lei Y (2018) A new accelerated alternating minimization method for analysis sparse recovery. Signal Process 145:167–174

  11. Sant A, Leinonen M, Rao BD (2022) Block-sparse signal recovery via general total variation regularized sparse Bayesian learning. IEEE Trans Signal Process 70:1056–1071

  12. Thomas TJ, Rani JS (2022) FPGA implementation of sparsity independent regularized pursuit for fast CS reconstruction. IEEE Trans Circuits Syst I Regul Pap 69(4):1617–1628

  13. Figueiredo MAT, Nowak RD, Wright SJ (2007) Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J Sel Top Signal Process 1(4):586–597

  14. Wang J, Kwon S, Shim B (2012) Generalized orthogonal matching pursuit. IEEE Trans Signal Process 60(12):6202–6216

  15. Ji S, Xue Y, Carin L (2008) Bayesian compressive sensing. IEEE Trans Signal Process 56(6):2346–2356

  16. Natarajan BK (1995) Sparse approximate solutions to linear systems. SIAM J Comput 24(2):227–234

  17. Chen SS, Donoho DL, Saunders MA (1998) Atomic decomposition by basis pursuit. SIAM J Sci Comput 20(1):33–61

  18. Liu Q, Wang J (2015) \(L_1\)-minimization algorithms for sparse signal reconstruction based on a projection neural network. IEEE Trans Neural Netw Learn Syst 27(3):698–707

  19. Liu Q, Zhang W, Xiong J, Xu B, Cheng L (2018) A projection-based algorithm for constrained \(L_1\)-minimization optimization with application to sparse signal reconstruction. In: 2018 Eighth international conference on information science and technology (ICIST), pp 437–443

  20. Feng R, Leung C, Constantinides AG, Zeng W (2017) Lagrange programming neural network for nondifferentiable optimization problems in sparse approximation. IEEE Trans Neural Netw Learn Syst 28(10):2395–2407

  21. Xu B, Liu Q (2018) Iterative projection based sparse reconstruction for face recognition. Neurocomputing 284:99–106

  22. Guo C, Yang Q (2015) A neurodynamic optimization method for recovery of compressive sensed signals with globally converged solution approximating to \(l_{0}\) minimization. IEEE Trans Neural Netw Learn Syst 26(7):1363–1374

  23. Li W, Bian W, Xue X (2020) Projected neural network for a class of non-Lipschitz optimization problems with linear constraints. IEEE Trans Neural Netw Learn Syst 31(9):3361–3373

  24. Ma L, Bian W (2021) A simple neural network for sparse optimization with \(l_1\) regularization. IEEE Trans Netw Sci Eng 8(4):3430–3442

  25. Wang D, Zhang Z (2019) KKT condition-based smoothing recurrent neural network for nonsmooth nonconvex optimization in compressed sensing. Neural Comput Appl 31(7):2905–2920

  26. Wen H, He X, Huang T (2022) Sparse signal reconstruction via recurrent neural networks with hyperbolic tangent function. Neural Netw. https://doi.org/10.1016/j.neunet.2022.05.022

  27. Che H, Wang J, Cichocki A (2022) Sparse signal reconstruction via collaborative neurodynamic optimization. Neural Netw 154:255–269

  28. Li J, Che H, Liu X (2022) Circuit design and analysis of smoothed \(l_0\) norm approximation for sparse signal reconstruction. Circuits Syst Signal Process 42:1–25

  29. Zhang X, Li C, Li H (2022) Finite-time stabilization of nonlinear systems via impulsive control with state-dependent delay. J Frankl Inst 359(3):1196–1214

  30. Zhang X, Li X, Cao J, Miaadi F (2018) Design of memory controllers for finite-time stabilization of delayed neural networks with uncertainty. J Frankl Inst 355(13):5394–5413

  31. Li H, Li C, Huang T, Zhang W (2018) Fixed-time stabilization of impulsive Cohen-Grossberg BAM neural networks. Neural Netw 98:203–211

  32. Li H, Li C, Huang T, Ouyang D (2017) Fixed-time stability and stabilization of impulsive dynamical systems. J Frankl Inst 354(18):8626–8644

  33. Garg K, Panagou D (2021) Fixed-time stable gradient flows: applications to continuous-time optimization. IEEE Trans Autom Control 66(5):2002–2015

  34. Ju X, Hu D, Li C, He X, Feng G (2022) A novel fixed-time converging neurodynamic approach to mixed variational inequalities and applications. IEEE Trans Cybern 52(12):12942–12953

  35. Ju X, Li C, Che H, He X, Feng G (2022) A proximal neurodynamic network with fixed-time convergence for equilibrium problems and its applications. IEEE Trans Neural Netw Learn Syst. https://doi.org/10.1109/TNNLS.2022.3144148

  36. Garg K, Baranwal M, Gupta R, Benosman M (2022) Fixed-time stable proximal dynamical system for solving MVIPs. IEEE Trans Autom Control. https://doi.org/10.1109/TAC.2022.3214795

  37. Bhat SP, Bernstein DS (2000) Finite-time stability of continuous autonomous systems. SIAM J Control Optim 38(3):751–766

  38. Polyakov A (2012) Nonlinear feedback design for fixed-time stabilization of linear control systems. IEEE Trans Autom Control 57(8):2106–2110

  39. Parikh N, Boyd S (2013) Proximal algorithms. Found Trends Optim 1:123–231

  40. Candes E, Tao T (2007) The Dantzig selector: statistical estimation when p is much larger than n. Ann Stat 35(6):2313–2351

  41. Meana H, Miyatake M, Guzm V (2004) Analysis of a wavelet-based watermarking algorithm. In: International conference on electronics, communications, and computers. IEEE Computer Society, Los Alamitos, CA, USA, p 283

  42. LaSalle JP (1967) An invariance principle in the theory of stability. Academic Press, New York, pp 277–286

Acknowledgements

This work was funded by the National Natural Science Foundation of China under Grant Nos. 62373310 and 62176218, and in part by the Graduate Student Research Innovation Project of Chongqing under Grant CYB22152.

Author information

Corresponding author

Correspondence to Chuandong Li.

Ethics declarations

Conflict of interest

The authors have no relevant conflicts of interest to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A

Proof of Theorem 1

The derivative of \(V_1\) with respect to t is given as follows:

$$\begin{aligned} \dot{V_1}(\tilde{\textit{u}})=(P\tilde{b} + (I-P)\tilde{c})^T \dot{\tilde{\textit{u}}}, \end{aligned}$$
(23)

Since \(\textit{u}^*\) is a constant, we have

$$\begin{aligned} \dot{\tilde{\textit{u}}}=\dot{\textit{u}}= - \frac{{(I - P)g(\textit{u}) + Pw - q}}{{\left\| {(I - P)g(\textit{u}) + Pw - q} \right\| _2^{1 - \alpha }}}. \end{aligned}$$
(24)

As stated in the definition, FtPNN (11) has an equilibrium point \(\textit{u}^*\), which indicates that

$$\begin{aligned}&-\lambda (\textit{u}^*)\phi (\textit{u}^*) = - \frac{{(I - 2P)g(\textit{u}^*) + P\textit{u}^* - q}}{{\left\| {(I - 2P)g(\textit{u}^*) + P\textit{u}^*- q} \right\| _2^{1 - \alpha }}} = 0 \nonumber \\&\Rightarrow \frac{{\phi (\textit{u}^*)}}{{\left\| {\phi (\textit{u}^*)} \right\| _2^{1 - \alpha }}} = 0\quad \textrm{or}\quad \lambda (\textit{u}^*)=0 \nonumber \\&\Rightarrow \frac{{\phi (\textit{u}^*)}}{{\left\| {\phi (\textit{u}^*)} \right\| _2^{1 - \alpha }}} = 0\quad \textrm{or}\quad \textit{u}^* \in Fin(\textit{u}) \nonumber \\&\Rightarrow \phi (\textit{u}^*)=0\quad \textrm{or}\quad \textit{u}^* \in Fin(\textit{u}), \end{aligned}$$
(25)

where \(Fin(\textit{u}):= \{\textit{u} \in R^n: -P\textit{u} -(I - 2P)g(\textit{u}) + q = 0\}\).

Then, combining (24) with the equilibrium condition (25), the dynamics can be written in the error coordinates as

$$\begin{aligned} \dot{\tilde{\textit{u}}} = - \frac{{P\tilde{\textit{u}} + (I - 2P)\tilde{c}}}{{\left\| {P\tilde{\textit{u}} + (I - 2P)\tilde{c}} \right\| _2^{1 - \alpha }}}. \end{aligned}$$
(26)

It then follows from Eqs. (23) and (26) that

$$\begin{aligned} \dot{V_1}(\tilde{\textit{u}})&= -(P\tilde{b} + (I-P)\tilde{c})^T \frac{{P\tilde{\textit{u}} + (I - 2P)\tilde{c}}}{{\left\| {P\tilde{\textit{u}} + (I - 2P)\tilde{c}} \right\| _2^{1 - \alpha }}} \nonumber \\&= -(P(\tilde{\textit{u}}-\tilde{c}) + (I-P)\tilde{c})^T \frac{{P\tilde{\textit{u}} + (I - 2P)\tilde{c}}}{{\left\| {P\tilde{\textit{u}} + (I - 2P)\tilde{c}} \right\| _2^{1 - \alpha }}} \nonumber \\&= -(P\tilde{\textit{u}} + (I - 2P)\tilde{c})^T \frac{{P\tilde{\textit{u}} + (I - 2P)\tilde{c}}}{{\left\| {P\tilde{\textit{u}} + (I - 2P)\tilde{c}} \right\| _2^{1 - \alpha }}} \nonumber \\&= -\frac{{\left\| {P\tilde{\textit{u}} + (I - 2P)\tilde{c}} \right\| _{2}^{2}}}{{\left\| {P\tilde{\textit{u}} + (I - 2P)\tilde{c}} \right\| _2^{1 - \alpha }}} \nonumber \\&= - {\left\| {P\tilde{\textit{u}} + (I - 2P)\tilde{c}} \right\| _2^{1 + \alpha }} \nonumber \\&\le 0. \end{aligned}$$
(27)

The value of \(G(\tilde{\textit{u}})\) is now discussed. According to Eq. (7), the function \(g(\cdot )\) is non-decreasing. For \(\tilde{\textit{u}}_i \ge 0\), let \({z_{1i}}(s)=g(s + \textit{u}_i^*) - g(\textit{u}_i^*)\); hence \({z_{1i}}(s) \ge 0\) for all \(s \ge 0\), and consequently

$$\begin{aligned} G_i(\tilde{\textit{u}}_i) \ge 0. \end{aligned}$$

Further,

$$\begin{aligned} g(\textbf{x}) - g(\textbf{e}) \le \textbf{x} - \textbf{e},\quad \forall \textbf{x} \ge \textbf{e}, \end{aligned}$$

therefore,

$$\begin{aligned} G_i(\tilde{\textit{u}}_i) = {\int\limits _0^{\widetilde{\textit{u}}_i} {{z_{1i}}(s)ds \le \int\limits _0^{\widetilde{\textit{u}}_i} {sds} = \frac{\widetilde{\textit{u}}_i^2}{2}}}. \end{aligned}$$

For \(\tilde{\textit{u}}_i \le 0\), let \({z_{2i}}(s)=g(\textit{u}_i^*) - g(s + \textit{u}_i^*)\); hence \({z_{2i}}(s) \ge 0\) for all \(s \le 0\), and consequently

$$\begin{aligned} G_i(\tilde{\textit{u}}_i) \ge 0. \end{aligned}$$

Correspondingly, \({z_{2i}}(s) \le -s\) for all \(s \le 0\), since

$$\begin{aligned} g(\textbf{x}) - g(\textbf{e}) \le \textbf{x} - \textbf{e},\quad \forall \textbf{x} \ge \textbf{e}. \end{aligned}$$

As a result, the following conclusion can be drawn:

$$\begin{aligned} G_i(\tilde{\textit{u}}_i) = {\int\limits _{\widetilde{\textit{u}}_i}^{0} {{z_{2i}}(s)ds \le \int\limits _{\widetilde{\textit{u}}_i}^0 {-sds} = \frac{\widetilde{\textit{u}}_i^2}{2}}}. \end{aligned}$$

The following is a summary of the above findings:

$$\begin{aligned} 0 \le G_i(\tilde{\textit{u}}_i) \le \frac{\widetilde{\textit{u}}_i^2}{2},\ \forall \tilde{\textit{u}}_i. \end{aligned}$$
(28)
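
The sandwich bound (28) is easy to verify numerically. The sketch below assumes \(g\) is the componentwise projection onto \([-1,1]\), a standard choice for PNNs in \(L_1\) minimization; this specific form of \(g\) is an assumption here, not a definition taken from the paper:

```python
import numpy as np

# Numerical check of the sandwich bound (28):
#     0 <= G_i(u~_i) <= u~_i^2 / 2,
# where G_i(u~) = integral_0^{u~} [g(s + u*) - g(u*)] ds.
# Assumption: g is the componentwise projection onto [-1, 1].
def g(s):
    return np.clip(s, -1.0, 1.0)

def G(u_tilde, u_star, num=20000):
    # trapezoid rule; np.diff keeps the sign right when u_tilde < 0
    s = np.linspace(0.0, u_tilde, num + 1)
    z = g(s + u_star) - g(u_star)
    return np.sum(0.5 * (z[:-1] + z[1:]) * np.diff(s))

ok = True
for u_star in (-1.7, -0.4, 0.0, 0.3, 2.1):
    for u_tilde in (-2.0, -0.5, 0.0, 0.7, 3.0):
        val = G(u_tilde, u_star)
        ok = ok and (-1e-8 <= val <= u_tilde ** 2 / 2 + 1e-8)
print(ok)
```
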

Using Eqs. (14) and (28), it is not difficult to obtain that

$$\begin{aligned} V_1(\tilde{\textit{u}})&=\textbf{1}^{T}PH(\tilde{\textit{u}}) + \textbf{1}^{T}(I-P)G(\tilde{\textit{u}})\\&\le \frac{1}{2}\delta _{\max }(P) \Vert \tilde{\textit{u}}\Vert _2^2 + \frac{1}{2}\delta _{\max }(I-P) \Vert \tilde{\textit{u}}\Vert _2^2. \end{aligned}$$

Here P is positive semi-definite, and the matrix \((I-P)\) is also positive semi-definite; the maximum eigenvalue is denoted by \(\delta _{\max }(\cdot )\). Define \(\gamma\) as:

$$\begin{aligned} \gamma = \frac{1}{2} \delta _{\max }(P) + \frac{1}{2} \delta _{\max }(I-P). \end{aligned}$$

One obtains

$$\begin{aligned} V_1(\tilde{\textit{u}}) \le \gamma \Vert \tilde{\textit{u}}\Vert _2^2, \end{aligned}$$
(29)

with \(\gamma > 0\). It follows from Eq. (28) that

$$\begin{aligned} V_1(\tilde{\textit{u}}) = \textbf{1}^{T}PH(\tilde{\textit{u}}) + \textbf{1}^{T}(I-P)G(\tilde{\textit{u}}) \ge 0. \end{aligned}$$
(30)

By (29) and (30), \(V_1\) is a positive semi-definite and radially unbounded Lyapunov function. We can conclude that the FtPNN (11) is Lyapunov stable based on (27) and (30). According to the LaSalle invariance principle [42], \(\tilde{\textit{u}}\) converges to an invariant subset \(U_{inv}\) of \(U \triangleq \, \{\tilde{\textit{u}}\ \, \vert P\tilde{b} + (I-P)\tilde{c} = 0\}\). It is clear from (26) and (27) that \(\dot{V}_1 = 0\) implies \(\dot{\tilde{\textit{u}}} = 0\), hence all states of U are invariant, and consequently \(U_{inv} = U\). Therefore, \(\textit{u}\) converges to U, and \(g(\textit{u})\) converges to the set of optimal solutions of (2). Theorem 1 is thus proven.

Proof of Theorem 2

Firstly, the relation between \(\tilde{\textit{u}}\) and \(\tilde{c}\) is established. Under condition 2 of Lemma 3, \(\textit{u}_i(t)\) has the same sign as \(\textit{u}^*_i\) after a finite time \(t_1 < \infty\). We consider the following two cases. (1) If \(\vert \textit{u}^*_i\vert < 1\), then \(\vert \tilde{c}_i \vert =\vert g(\tilde{\textit{u}}_i+\textit{u}^*_i)-g(\textit{u}^*_i)\vert \le \vert \tilde{\textit{u}}_i \vert\). (2) If \(\vert \textit{u}^*_i\vert > 1\), then \(\tilde{c}_i=g(\textit{u}_i)-g(\textit{u}^*_i) = g(\tilde{\textit{u}}_i+\textit{u}^*_i)-g(\textit{u}^*_i)\). Based on condition 2 of Lemma 3, the presented FtPNN is globally convergent, so \(\vert \tilde{\textit{u}}_i \vert\) becomes arbitrarily small: for any small \(\ell > 0\), there is \(t(\ell ) < \infty\) such that \(\vert \tilde{\textit{u}}_i(t)\vert < \ell\) for all \(t>t(\ell )\). Defining \(t_2 = t(1-\mathfrak {p})\) for a small \(\mathfrak {p} > 0\), we get \(\vert \tilde{\textit{u}}_i(t)\vert < 1-\mathfrak {p}\) for all \(t>t_2\), so \(\textit{u}^*_i\) and \(\tilde{\textit{u}}_i + \textit{u}^*_i\) have the same sign, and \(\tilde{c}_i = 0\) is obtained.

According to Assumption 1, P is a symmetric idempotent matrix, so we have \(P^2=P\), \(P\tilde{\textit{u}} = P^T P\tilde{\textit{u}}=\tilde{\textit{u}}\), and then \(P\tilde{c}=\tilde{c}\). Consequently,

$$\begin{aligned} \Vert P\tilde{\textit{u}} + (I-2P)\tilde{c}\Vert ^2_2 = \Vert \tilde{\textit{u}} + \tilde{c}-2\tilde{c}\Vert ^2_2 = \Vert \tilde{\textit{u}} -\tilde{c}\Vert ^2_2 \ge \Vert \tilde{\textit{u}}\Vert ^2_2 -\Vert \tilde{c}\Vert ^2_2. \end{aligned}$$
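
The identities used above (\(P=P^T\), \(P^2=P\)) can be checked numerically. The construction below, the orthogonal projector onto the row space of a random matrix A, is an assumption for illustration; the paper only requires P to be symmetric and idempotent:

```python
import numpy as np

# A standard symmetric idempotent projection matrix (cf. Assumption 1):
# the orthogonal projector onto the row space of A,
#     P = A^T (A A^T)^{-1} A.
# The random A below is purely illustrative.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 10))
P = A.T @ np.linalg.solve(A @ A.T, A)
I = np.eye(10)

sym = np.allclose(P, P.T)                           # P = P^T
idem = np.allclose(P @ P, P)                        # P^2 = P
psd = np.all(np.linalg.eigvalsh(I - P) > -1e-10)    # I - P is positive semi-definite
invol = np.allclose((I - 2 * P) @ (I - 2 * P), I)   # (I - 2P)^2 = I, as used in the proofs
print(sym, idem, psd, invol)
```
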

In the sequel, take \(t_e=\max \{t_1,t_2\}<\infty\); then for all \(t>t_e\): if \(\vert \textit{u}_i^*\vert <1\), \(\vert \tilde{c}_i\vert \le \vert \tilde{\textit{u}}_i\vert\), whereas if \(\vert \textit{u}_i^*\vert >1\), \(\tilde{c}_i = 0\). For simplicity, we write \(i \in \Gamma _c\) if \(\vert \textit{u}_i^*\vert <1\) and \(i \in \Gamma _b\) if \(\vert \textit{u}_i^* \vert >1\). Since \(\tilde{c}\) vanishes on \(\Gamma _b\), it follows that \(\Vert \tilde{c}\Vert \le \Vert \tilde{\textit{u}}_{\Gamma _c}\Vert \le \Vert \tilde{\textit{u}}\Vert\).

Utilizing Assumption 1, \(P_{\Gamma _b}\) and \(P_{\Gamma _c}\) are nonsingular; thus \(\Vert P\tilde{\textit{u}} + (I-2P)\tilde{c}\Vert ^2_2 > 0\) so long as \(\Vert \tilde{\textit{u}}\Vert ^2_2>0\). Then, there is a constant \(\chi \in (0,1)\) such that \(\Vert \tilde{c}\Vert \le \chi \Vert \tilde{\textit{u}}\Vert\), and hence

$$\begin{aligned} \Vert P\tilde{\textit{u}} + (I-2P)\tilde{c}\Vert ^2_2 \ge (1-\chi )\Vert \tilde{\textit{u}}\Vert ^2_2, \end{aligned}$$
(31)

where \((1-\chi ) >0\).

Next, the FtPNN (11) is considered. According to Eqs. (27) and (31), for \(t>t_e\), we have

$$\begin{aligned} \dot{V_1}(\tilde{\textit{u}})= - {\left\| {P\tilde{\textit{u}} + (I - 2P)\tilde{c}} \right\| _2^{1 + \alpha }} \le -(1-\chi )^{(1+\alpha )/2}\Vert \tilde{\textit{u}}\Vert ^{(1 + \alpha )/2}_2. \end{aligned}$$
(32)

Hence \(V_1(\tilde{\textit{u}})\) converges to zero in some finite time \(t_f>t_e\), and one obtains \(\textit{u}=\textit{u}^*\) for all \(t>t_f\).

From (29) and (32), one obtains

$$\begin{aligned} \dot{V_1}(\tilde{\textit{u}}(t)) \le -(1-\chi )^{\frac{1+\alpha }{2}}\gamma ^{-\frac{1+\alpha }{4}} V_1(\tilde{\textit{u}}(t))^{\frac{1+\alpha }{4}}. \end{aligned}$$
(33)

In view of the above,

$$\begin{aligned}&V_1(\tilde{\textit{u}}(t))^{1-\frac{1+\alpha }{4}} - V_1(\tilde{\textit{u}}(0))^{1-\frac{1+\alpha }{4}}\nonumber \\&\quad =\int \limits^t_0\left( 1-\frac{1+\alpha }{4}\right) V_1(\tilde{\textit{u}}(t))^ {-\frac{1+\alpha }{4}}\dot{V_1}(\tilde{\textit{u}}(t))\textrm{d}t\nonumber \\&\quad \le -(1-\chi )^{\frac{1+\alpha }{2}}\gamma ^{-\frac{1+\alpha }{4}} \int\limits ^t_0\left( 1-\frac{1+\alpha }{4}\right) \textrm{d}t\nonumber \\&\quad =-(1-\chi )^{\frac{1+\alpha }{2}} \gamma ^{-\frac{1+\alpha }{4}}\left( 1-\frac{1+\alpha }{4}\right) t. \end{aligned}$$
(34)

Therefore, one observes that,

$$\begin{aligned} t \le \frac{4}{(3-\alpha )(1-\chi )^{\frac{1+\alpha }{2}} \gamma ^{-\frac{1+\alpha }{4}}}V_1(\tilde{\textit{u}}(0))^{\frac{3-\alpha }{4}}. \end{aligned}$$
(35)

Thus, the FtPNN (11) converges within \(\frac{4}{(3-\alpha )(1-\chi )^{\frac{1+\alpha }{2}} \gamma ^{-\frac{1+\alpha }{4}}}V_1(\tilde{\textit{u}}(0))^{\frac{3-\alpha }{4}}\). Theorem 2 is proved. \(\square\)
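
The finite-time mechanism behind the bound (35) can be illustrated with a scalar surrogate. Replacing the paper's operator with the toy choice \(\phi (u)=u\) (an assumption), the normalized flow \(\dot{u} = -u/\Vert u\Vert _2^{1-\alpha }\) gives \(\frac{d}{dt}\Vert u\Vert = -\Vert u\Vert ^{\alpha }\), so the state reaches zero at the finite time \(t^* = \Vert u(0)\Vert ^{1-\alpha }/(1-\alpha )\):

```python
import numpy as np

# Euler simulation of du/dt = -u / ||u||^(1-alpha) with the surrogate
# phi(u) = u (an assumption standing in for the FtPNN right-hand side).
# Theory: ||u(t)||^(1-alpha) decreases linearly at rate (1-alpha), so the
# state reaches zero at t* = ||u(0)||^(1-alpha) / (1-alpha).
alpha = 0.5
u = np.array([3.0, -4.0])                       # ||u(0)|| = 5
t_star = np.linalg.norm(u) ** (1 - alpha) / (1 - alpha)

dt, t = 1e-4, 0.0
while np.linalg.norm(u) > 1e-6:
    u = u - dt * u / np.linalg.norm(u) ** (1 - alpha)
    t += dt
print(round(t, 3), round(t_star, 3))            # empirical vs. theoretical settling time
```

Note how the normalization by \(\Vert \cdot \Vert ^{1-\alpha }\) keeps the decay rate away from zero near the equilibrium, which is exactly what an unnormalized linear flow cannot do.
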

Appendix B

Proof of Lemma 4

Consider the matrix \(H(\tilde{\textit{u}}(t))\). According to the definition of \(\varrho (\cdot )\), we have

$$\begin{aligned} \varrho (\textbf{x})-\varrho (\textbf{e}) \le \textbf{x}-\textbf{e}, \quad \textrm{for} \quad \forall \textbf{x} \ge \textbf{e}, \end{aligned}$$

as a result,

$$\begin{aligned} \vert \tilde{c}_i(t)\vert = \vert \tilde{\alpha }_i(\tilde{\textit{u}}_i(t))\vert \le \vert \tilde{\textit{u}}_i(t)\vert , \quad \forall \tilde{\textit{u}}_i(t), \end{aligned}$$
(36)

we know that the function \(\varrho (\cdot )\) is non-decreasing and \(\alpha _i(s) = \varrho (s+\textit{u}_i^*)-\varrho (\textit{u}_i^*)\), which indicates that \(\tilde{\textit{u}}_i(t) \cdot \alpha _i(\tilde{\textit{u}}_i(t)) \ge 0\).

If \(\tilde{\textit{u}}_i(t) \le 0\), then \(\alpha _i(\tilde{\textit{u}}_i(t)) \le 0\) and \(\tilde{\textit{u}}_i(t) \le \alpha _i(\tilde{\textit{u}}_i(t))\), and one obtains

$$\begin{aligned} H_i(\tilde{\textit{u}}_i(t)) = \int\limits _0^{\widetilde{\textit{u}}_i(t)} {{\alpha _{i}}(s)ds} = \int \limits_{\widetilde{\textit{u}}_i(t)}^{0} {(-{\alpha _{i}}(s))ds} \le \int \limits_{\widetilde{\textit{u}}_i(t)}^{0}{(-s)}ds, \end{aligned}$$

therefore,

$$\begin{aligned} 0 \le H_i(\tilde{\textit{u}}_i(t)) \le \frac{\tilde{\textit{u}}_i^2(t)}{2}. \end{aligned}$$

Similarly, if \(\tilde{\textit{u}}_i(t) \ge 0, \alpha _i(\tilde{\textit{u}}_i(t)) \ge 0\) and \(\tilde{\textit{u}}_i(t) \ge \alpha _i(\tilde{\textit{u}}_i(t))\), we have

$$\begin{aligned} H_i(\tilde{\textit{u}}_i(t)) = \int\limits _0^{\widetilde{\textit{u}}_i(t)} {{\alpha _{i}}(s)ds} \le \int \limits_0^{\widetilde{\textit{u}}_i(t)} {s}ds. \end{aligned}$$

Consequently, we can obtain the following:

$$\begin{aligned} 0 \le H_i(\tilde{\textit{u}}_i(t)) \le \frac{\tilde{\textit{u}}_i^2(t)}{2}. \end{aligned}$$

Therefore, we have \(0 \le H_i(\tilde{\textit{u}}_i(t)) \le \tilde{\textit{u}}_i^2(t)/2\) for any \(\tilde{\textit{u}}_i(t)\).

As shown in the preceding argument, the same property holds for \(G(\tilde{\textit{u}}(t))\), and hence the conclusion follows.

Proof of Lemma 5

Consider the bound on \(\vert H_i(\tilde{\textit{u}}_j(t))\vert\). When \(i \ne j\), one obtains

$$\begin{aligned} \vert H_i(\tilde{\textit{u}}_j(t))\vert&= \vert \int \limits_{\bar{\textit{u}}_j}^{\widetilde{\textit{u}}_j(t)} {\alpha _{i}(\rho _{ij}(s))}ds \vert \nonumber \\&\le \int \limits_{\bar{\textit{u}}_j}^{\widetilde{\textit{u}}_j(t)} {\vert \alpha _{i}(\rho _{ij}(s))\vert }ds \nonumber \\&= \int \limits_{\bar{\textit{u}}_j}^{\widetilde{\textit{u}}_j(t)} {\vert \varrho (\rho _{ij}(s)+\textit{u}_i^*)-\varrho (\textit{u}_i^*)\vert }ds \nonumber \\&= \int\limits _{\bar{\textit{u}}_j}^{\widetilde{\textit{u}}_j(t)} {\vert \tilde{c}_i(\rho _{ij}(s))\vert }ds. \end{aligned}$$
(37)

Furthermore, according to condition 2 of Lemma 3, \(\tilde{\textit{u}}(t)\), \(\tilde{c}(t)\), and \(\tilde{b}(t)\) are bounded, so there exist three constants \(\tau _1\), \(\tau _2\), and \(\tau _3\) such that the following inequalities hold for each \(\rho _{ij}(s)\) before the FxtPNN (17) converges, i.e., while \(\Vert \tilde{\textit{u}}\Vert _2 > 0\):

$$\begin{aligned} \vert \tilde{c}_i(\rho _{ij}(s))\vert \le \tau _1 \Vert \tilde{\textit{u}}\Vert _2\\ \vert \tilde{\textit{u}}_j(t)-\bar{\textit{u}}_j\vert \le \tau _2 \Vert \tilde{\textit{u}}\Vert _2\\ \vert \tilde{b}_i(\rho _{ij}(s))\vert \le \tau _3 \Vert \tilde{\textit{u}}\Vert _2, \end{aligned}$$

where \(\tau _1 > 0\), \(\tau _2 > 0\), and \(\tau _3 > 0\).

In addition,

$$\begin{aligned} \int \limits_{\bar{\textit{u}}_j}^{\widetilde{\textit{u}}_j(t)} {\vert \tilde{c}_i(\rho _{ij}(s))\vert }ds&\le \tau _1 \Vert \tilde{\textit{u}}\Vert _2 \int\limits _{\bar{\textit{u}}_j}^{\widetilde{\textit{u}}_j(t)} {}ds\\&= \tau _1 \Vert \tilde{\textit{u}}\Vert _2 \vert \tilde{\textit{u}}_j(t)-\bar{\textit{u}}_j\vert \\&\le \tau _1 \tau _2 \Vert \tilde{\textit{u}}\Vert _2^2, \end{aligned}$$

therefore,

$$\begin{aligned} \vert H_i(\tilde{\textit{u}}_j(t))\vert \le \tau _1 \tau _2 \Vert \tilde{\textit{u}}\Vert _2^2. \end{aligned}$$

According to Lemma 4, one obtains

$$\begin{aligned} H_i(\tilde{\textit{u}}_j(t)) \le \frac{1}{2} \tilde{\textit{u}}_j^2(t) \le \frac{1}{2} \Vert \tilde{\textit{u}}\Vert _2^2. \end{aligned}$$

Letting \(\tau ^{'}=\max \{\tau _1 \tau _2, 1/2\}\), we have

$$\begin{aligned} \vert H_i(\tilde{\textit{u}}_j(t))\vert \le \tau ^{'} \Vert \tilde{\textit{u}}\Vert _2^2, \quad \forall i,j, \end{aligned}$$

one obtains,

$$\begin{aligned} \vert Tr\{PH\}\vert&= \vert Tr\{P^{T}PH\}\vert = \left| \sum _{i=1}^N \sum _{j=1}^N \{P_i^{T}P_{j}H_i(\widetilde{\textit{u}}_j(t))\}\right| \nonumber \\&\le \sum _{i=1}^N \sum _{j=1}^N \vert P_i^{T}P_{j}\vert \cdot \vert H_i(\widetilde{\textit{u}}_j(t))\vert \nonumber \\&\le (\zeta \tau ^{'}(N^2-N)+ \tau ^{'}N)\Vert \tilde{\textit{u}}\Vert _2^2, \end{aligned}$$
(38)

where \(\zeta\) is defined in (56).

Correspondingly, the following conclusion is reached based on the preceding discussion:

$$\begin{aligned} \vert G_i(\tilde{\textit{u}}_j(t))\vert \le \tau ^{''} \Vert \tilde{\textit{u}}\Vert _2^2, \quad \forall i,j, \end{aligned}$$

where \(\tau ^{''}=\max \{\tau _2 \tau _3, 1/2\}\), one obtains,

$$\begin{aligned} \vert Tr\{PG\}\vert \le (\zeta \tau ^{''}(N^2-N)+ \tau ^{''}N)\Vert \tilde{\textit{u}}\Vert _2^2. \end{aligned}$$
(39)

Using Lemma 4, one obtains,

$$\begin{aligned} 0 \le \vert Tr(G)\vert \le \sum _{i=1}^N{G_i(\tilde{\textit{u}}_i(t))} \le \frac{N}{2}\Vert \tilde{\textit{u}}(t)\Vert _2^2, \end{aligned}$$
(40)

it follows from (38), (39), and (40) that

$$\begin{aligned} \vert V_2(\tilde{\textit{u}}(t))\vert&= \vert Tr\{PH(\tilde{\textit{u}}(t))\} + Tr\{(I-P)G(\tilde{\textit{u}}(t))\}\vert \nonumber \\&\le \vert Tr\{PH\}\vert + \vert Tr(G)\vert + \vert Tr(PG)\vert \nonumber \\&\le \vert Tr\{PH\}\vert + \frac{N}{2}\Vert \tilde{\textit{u}}(t)\Vert _2^2 + \vert Tr(PG)\vert \nonumber \\&\le \left( \zeta \tau ^{'}(N^2-N)+\tau ^{'}N + \frac{N}{2} + \zeta \tau ^{''}(N^2-N)+\tau ^{''}N\right) \Vert \tilde{\textit{u}}(t)\Vert _2^2. \end{aligned}$$
(41)

As a result, there exists a positive constant \(\psi\) such that \(V_2(\tilde{\textit{u}}(t)) \le \psi \Vert \tilde{\textit{u}}(t)\Vert _2^2\), where \(\psi = \zeta \tau ^{'}(N^2-N)+\tau ^{'}N + \frac{N}{2} + \zeta \tau ^{''}(N^2-N)+\tau ^{''}N\). Since \(\tau ^{'} \ge 1/2\) and \(\tau ^{''} \ge 1/2\), it follows that \(\psi \ge 3N/2\).

Proof of Lemma 6

According to the proof of Theorem 2, the FxtPNN is Lyapunov stable, and \(V_2\) is bounded, as proved in Lemma 5. Hence, according to the LaSalle invariance principle [42], the FxtPNN (17) converges to an invariant set L, where

$$\begin{aligned} L=\{\tilde{\textit{u}} \vert P\tilde{c}+(I-P)\tilde{b}=0\}. \end{aligned}$$

As we know, the solution of the FxtPNN is unique. That is to say, the invariant set L has a unique element, i.e., \(\tilde{\textit{u}}=0\) with \(V_2(\tilde{\textit{u}})=0\). Accordingly, \(\Vert P\tilde{c}+(I-P)\tilde{b}\Vert ^2_2 \ne 0\) and \(\Vert \tilde{\textit{u}}\Vert ^2_2 \ne 0\) hold before the FxtPNN converges. Under this premise, there exist two constants \(0<\kappa <1\) and o, such that

$$\begin{aligned} o \Vert \tilde{\textit{u}}\Vert ^2_2 \le \Vert P\tilde{c}+(I-P)\tilde{b}\Vert ^2_2, \end{aligned}$$

where

$$\begin{aligned} o = \left\{ \begin{array}{ll} \kappa \frac{\Vert P\tilde{c}+(I-P)\tilde{b}\Vert ^2_2}{\Vert \tilde{\textit{u}}\Vert ^2_2}, \quad &{}\textrm{if} \ \Vert P\tilde{c}+(I-P)\tilde{b}\Vert ^2_2 < \Vert \tilde{\textit{u}}\Vert ^2_2,\\ \kappa , \quad &{}\textrm{if} \ \Vert P\tilde{c}+(I-P)\tilde{b}\Vert ^2_2 \ge \Vert \tilde{\textit{u}}\Vert ^2_2, \end{array}\right. \end{aligned}$$

with \(o \in (0,1)\).

Proof of Lemma 7

Since \(\tilde{\textit{u}} = \textit{u}(t)-\textit{u}^*\) and \(\textit{u}^*\) is a constant, one obtains

$$\begin{aligned} \dot{\tilde{\textit{u}}} = \dot{\textit{u}}&= -\frac{{P(w(t)) + (I - P)g(\textit{u}(t)) - q}}{{\left\| {P(w(t)) + (I - P)g(\textit{u}(t)) - q} \right\| _2^{1 - \alpha }}} - \frac{{P(w(t)) + (I - P)g(\textit{u}(t)) - q}}{{\left\| {P(w(t)) + (I - P)g(\textit{u}(t)) - q} \right\| _2^{1 - \beta }}} \nonumber \\&= -\frac{{P(c(t)) + (I - P)b(t) - q}}{{\left\| {P(c(t)) + (I - P)b(t) - q} \right\| _2^{1 - \alpha }}} - \frac{{P(c(t)) + (I - P)b(t) - q}}{{\left\| {P(c(t)) + (I - P)b(t) - q} \right\| _2^{1 - \beta }}}. \end{aligned}$$
(42)

From FxtPNN (17), \(\textit{u}^*\) is the equilibrium point, one obtains,

$$\begin{aligned} -\eta (\textit{u}^*)\phi (\textit{u}^*)&= - \frac{{(I - 2P)g(\textit{u}^*) + P\textit{u}^* - q}}{{\left\| {(I - 2P)g(\textit{u}^*) + P\textit{u}^*- q} \right\| _2^{1 - \alpha }}} - \frac{{(I - 2P)g(\textit{u}^*) + P\textit{u}^* - q}}{{\left\| {(I - 2P)g(\textit{u}^*) + P\textit{u}^*- q} \right\| _2^{1 - \beta }}} = 0\nonumber \\&\Rightarrow \frac{{\phi (\textit{u}^*)}}{{\left\| {\phi (\textit{u}^*)} \right\| _2^{1 - \alpha }}}+ \frac{{\phi (\textit{u}^*)}}{{\left\| {\phi (\textit{u}^*)} \right\| _2^{1 - \beta }}}= 0\quad \textrm{or}\quad \eta (\textit{u}^*)=0 \nonumber \\&\Rightarrow \frac{{\phi (\textit{u}^*)}}{{\left\| {\phi (\textit{u}^*)} \right\| _2^{1 - \alpha }}} + \frac{{\phi (\textit{u}^*)}}{{\left\| {\phi (\textit{u}^*)} \right\| _2^{1 - \beta }}} = 0\quad \textrm{or}\quad \textit{u}^* \in Fin(\textit{u}) \nonumber \\&\Rightarrow \phi (\textit{u}^*)=0\quad \textrm{or}\quad \textit{u}^* \in Fin(\textit{u}), \end{aligned}$$
(43)

we can get \(c^*\) and \(b^*\) are the equilibrium points, such that,

$$\begin{aligned} q = Pc^* + (I - P)b^*, \end{aligned}$$
(44)

then, combining (42) and (44), one obtains

$$\begin{aligned} \dot{\tilde{\textit{u}}}&= -\frac{{P(c - c^*) + (I - P)(b-b^*)}}{{\left\| {P(c - c^*) + (I - P)(b-b^*)} \right\| _2^{1 - \alpha }}} - \frac{{P(c - c^*) + (I - P)(b-b^*)}}{{\left\| {P(c - c^*) + (I - P)(b-b^*)} \right\| _2^{1 - \beta }}} \nonumber \\&= -\frac{{P\tilde{c} + (I - P)\tilde{b}}}{{\left\| {P\tilde{c} + (I - P)\tilde{b}} \right\| _2^{1 - \alpha }}} - \frac{{P\tilde{c} + (I - P)\tilde{b}}}{{\left\| {P\tilde{c} + (I - P)\tilde{b}} \right\| _2^{1 - \beta }}}. \end{aligned}$$
(45)

By the differentiation rule for an integral with a variable upper limit, we have

$$\begin{aligned} \frac{dH_i(\tilde{\textit{u}}_j(t))}{dt}&= \frac{dH_i(\tilde{\textit{u}}_j(t))}{d\tilde{\textit{u}}_j}\frac{d\tilde{\textit{u}}_j}{dt} = \frac{d \int_c^{\tilde{\textit{u}}_j(t)} {\alpha _i(\rho _{ij}(s))ds}}{d\tilde{\textit{u}}_j} \dot{\tilde{\textit{u}}}_j = \alpha _i(\rho _{ij}(\tilde{\textit{u}}_j)) \dot{\tilde{\textit{u}}}_j=\tilde{c}_i \dot{\tilde{\textit{u}}}_j, \end{aligned}$$
(46)

and

$$\begin{aligned} \frac{dG_i(\tilde{\textit{u}}_j(t))}{dt}&= \frac{dG_i(\tilde{\textit{u}}_j(t))}{d\tilde{\textit{u}}_j}\frac{d\tilde{\textit{u}}_j}{dt} = \frac{d \int_c^{\tilde{\textit{u}}_j(t)} {\beta _i(\rho _{ij}(s))ds}}{d\tilde{\textit{u}}_j} \dot{\tilde{\textit{u}}}_j = \beta _i(\rho _{ij}(\tilde{\textit{u}}_j)) \dot{\tilde{\textit{u}}}_j=\tilde{b}_i \dot{\tilde{\textit{u}}}_j. \end{aligned}$$
(47)

Therefore,

$$\begin{aligned} \frac{dTr(PH(\tilde{\textit{u}}))}{dt} = \frac{d\sum _{i=1}^{N} \sum _{j=1}^{N} p_i^T p_j H_i(\tilde{\textit{u}}_j(t))}{dt} = \sum _{i=1}^{N} \sum _{j=1}^{N} p_i^T p_j \tilde{c}_i \dot{\tilde{\textit{u}}}_j = (P\tilde{c})^T \dot{\tilde{\textit{u}}}(t), \end{aligned}$$
(48)

and

$$\begin{aligned} \frac{dTr\{(I-P)G(\tilde{\textit{u}})\}}{dt}&= \frac{d\sum _{i=1}^{N} G_i(\tilde{\textit{u}}_i(t))}{dt} - \frac{d\sum _{i=1}^{N} \sum _{j=1}^{N} p_i^T p_j G_i(\tilde{\textit{u}}_j(t))}{dt} \nonumber \\&= \sum _{i=1}^{N} \tilde{b}_i \dot{\tilde{\textit{u}}}_i(t) - \sum _{i=1}^{N} \sum _{j=1}^{N} p_i^T p_j \tilde{b}_i \dot{\tilde{\textit{u}}}_j = ((I-P)\tilde{b})^T \dot{\tilde{\textit{u}}}(t). \end{aligned}$$
(49)

Thus, by (20), we get

$$\begin{aligned} \dot{V}_2&= \left( \frac{dV_2}{d\tilde{\textit{u}}}\right) ^T \frac{d\tilde{\textit{u}}}{dt} = (P\tilde{c} + (I - P)\tilde{b})^T \dot{\tilde{\textit{u}}}(t) \nonumber \\&= (P\tilde{c} + (I - P)\tilde{b})^T\left[-\frac{{P\tilde{c} + (I - P)\tilde{b}}}{{\left\| {P\tilde{c} + (I - P)\tilde{b}} \right\| _2^{1 - \alpha }}} - \frac{{P\tilde{c} + (I - P)\tilde{b}}}{{\left\| {P\tilde{c} + (I - P)\tilde{b}} \right\| _2^{1 - \beta }}}\right] \nonumber \\&= -\left\| {P\tilde{c} + (I - P)\tilde{b}} \right\| _2^{1 + \alpha } - \left\| {P\tilde{c} + (I - P)\tilde{b}} \right\| _2^{1 + \beta }, \end{aligned}$$
(50)

which shows that \(\dot{V}_2 \le 0\).
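The sign of \(\dot{V}_2\) in (50) can be checked numerically. The sketch below uses illustrative values only; the vector `z` stands in for \(P\tilde{c} + (I - P)\tilde{b}\), and it verifies the identity \(z^T\big[-z/\Vert z\Vert _2^{1-\alpha } - z/\Vert z\Vert _2^{1-\beta }\big] = -\Vert z\Vert _2^{1+\alpha } - \Vert z\Vert _2^{1+\beta }\):

```python
import math

# Illustrative values: z stands in for P*c_tilde + (I - P)*b_tilde;
# the exponents satisfy 0 < alpha < 1 < beta as in the paper.
alpha, beta = 0.5, 1.5
z = [0.3, -1.2, 0.7, 2.0]

norm = math.sqrt(sum(v * v for v in z))

# Right-hand side of the error dynamics: -z/||z||^{1-alpha} - z/||z||^{1-beta}
rhs = [-v / norm ** (1 - alpha) - v / norm ** (1 - beta) for v in z]

# V_dot = z^T rhs should equal -||z||^{1+alpha} - ||z||^{1+beta} < 0
v_dot = sum(a * b for a, b in zip(z, rhs))
closed_form = -(norm ** (1 + alpha)) - (norm ** (1 + beta))

assert abs(v_dot - closed_form) < 1e-9
assert v_dot < 0
```

The inner product collapses to the two negative norm powers because \(z^T z = \Vert z\Vert _2^2\), so each fraction contributes \(-\Vert z\Vert _2^{2-(1-\alpha )} = -\Vert z\Vert _2^{1+\alpha }\) (and likewise for \(\beta\)).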

Proof of Theorem 3

By Lemma 5, one obtains

$$\begin{aligned} V_2(\tilde{\textit{u}}) \le \psi \Vert \tilde{\textit{u}}\Vert _2^2, \end{aligned}$$

and, from Lemma 4, we have

$$\begin{aligned} \xi \Vert \tilde{\textit{u}}\Vert _2^2 \le \Vert P\tilde{c} + (I - P)\tilde{b}\Vert _2^2, \end{aligned}$$

so that we can obtain

$$\begin{aligned} \Vert P\tilde{c} + (I - P)\tilde{b}\Vert _2^2 \ge \frac{\xi }{\psi } V_2(\tilde{\textit{u}}). \end{aligned}$$
(51)

By Eq. (51), we get

$$\begin{aligned}&\Vert P\tilde{c} + (I - P)\tilde{b}\Vert _2^{1+\alpha } \ge \left( \frac{\xi V_2(\tilde{\textit{u}})}{\psi }\right) ^{(1+\alpha )/2} \nonumber \\&\Vert P\tilde{c} + (I - P)\tilde{b}\Vert _2^{1+\beta } \ge \left( \frac{\xi V_2(\tilde{\textit{u}})}{\psi }\right) ^{(1+\beta )/2}. \end{aligned}$$
(52)

From Eqs. (50) and (52),

$$\begin{aligned} \dot{V_2}(\tilde{\textit{u}})&= (P\tilde{c} + (I - P)\tilde{b})^T \left[ -\frac{{P\tilde{c} + (I - P)\tilde{b}}}{{\left\| {P\tilde{c} + (I - P)\tilde{b}} \right\| _2^{1 - \alpha }}} - \frac{{P\tilde{c} + (I - P) \tilde{b}}}{{\left\| {P\tilde{c} + (I - P)\tilde{b}} \right\| _2^{1 - \beta }}}\right] \nonumber \\&= - \left\| {P\tilde{c} + (I - P)\tilde{b}} \right\| _2^{1 + \alpha } - \left\| {P\tilde{c} + (I - P)\tilde{b}} \right\| _2^{1 + \beta } \nonumber \\&\le -\left( \frac{\xi V_2(\tilde{\textit{u}})}{\psi }\right) ^{(1+\alpha )/2} - \left( \frac{\xi V_2(\tilde{\textit{u}})}{\psi }\right) ^{(1+\beta )/2}. \end{aligned}$$
(53)

Based on Lemmas 6 and 2, the settling time \(\mathcal {T}\) satisfies:

$$\begin{aligned} \mathcal {T} \le \mathcal {T}_{\max } = \frac{2\left( \frac{\xi }{\psi }\right) ^{-(1+\alpha )/2}}{1-\alpha } + \frac{2\left( \frac{\xi }{\psi }\right) ^{-(1+\beta )/2}}{\beta -1}. \end{aligned}$$
(54)
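The bound (54) is a fixed constant determined only by \(\xi\), \(\psi\), \(\alpha\), and \(\beta\). A minimal sketch of the arithmetic, with hypothetical parameter values (the actual \(\xi\) and \(\psi\) come from Lemmas 4 and 5 and are problem-dependent):

```python
# Hypothetical parameter values; the exponents satisfy 0 < alpha < 1 < beta.
xi, psi = 0.8, 2.0
alpha, beta = 0.5, 1.5

ratio = xi / psi

# Settling-time bound from (54): independent of the initial condition.
t_max = (2 * ratio ** (-(1 + alpha) / 2) / (1 - alpha)
         + 2 * ratio ** (-(1 + beta) / 2) / (beta - 1))

assert 0 < t_max < float("inf")
```

Note that both terms are finite precisely because \(\alpha < 1\) and \(\beta > 1\); this is what distinguishes the fixed-time bound from a finite-time bound that depends on the initial state.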

By Lemma 3, \(V_2=0\) for all \(t \ge \mathcal {T}\), regardless of the initial conditions.

From (53), one obtains

$$\begin{aligned} \dot{V_2}(\tilde{\textit{u}}(t)) \le -\left( \frac{\xi }{\psi }\right) ^{(1+\alpha )/2} V_2(\tilde{\textit{u}}(t))^{(1+\alpha )/2} - \left( \frac{\xi }{\psi }\right) ^{(1+\beta )/2}V_2(\tilde{\textit{u}}(t))^{(1+\beta )/2}. \end{aligned}$$
(55)

To prove fixed-time convergence, define a positive constant \(t_1\) such that \(V_2(\tilde{\textit{u}}(t_1))=1\). This leads to

$$\begin{aligned}&V_2(\tilde{\textit{u}}(\mathcal {T}))^{1-(1+\alpha )/2}-V_2(\tilde{\textit{u}}(t_1))^{1-(1+\alpha )/2}+V_2(\tilde{\textit{u}}(t_1))^{1-(1+\beta )/2}-V_2(\tilde{\textit{u}}(0))^{1-(1+\beta )/2}\nonumber \\ &\quad= \int\limits _{t_1}^{\mathcal {T}}\left( \frac{1-\alpha }{2}\right) V_2(\tilde{\textit{u}} (t))^{-(1+\alpha )/2}\dot{V}_2(\tilde{\textit{u}}(t))dt+\int \limits_0^{t_1} \left( \frac{1-\beta }{2}\right) V_2(\tilde{\textit{u}}(t))^{-(1+\beta )/2} \dot{V}_2(\tilde{\textit{u}}(t))dt \nonumber \\ &\quad\le-\,\left( \frac{\xi }{\psi }\right) ^{(1+\alpha )/2} \int \limits^{\mathcal {T}}_{t_1} \frac{1-\alpha }{2}dt-\left( \frac{\xi }{\psi }\right) ^{(1+\beta )/2} \int \limits^{t_1}_{0} \frac{1-\beta }{2}dt \nonumber \\ &\quad =-\,\left( \frac{\xi }{\psi }\right) ^{(1+\alpha )/2} \frac{1-\alpha }{2}({\mathcal {T}}-t_1)-\left(\frac{\xi }{\psi }\right)^{(1+\beta )/2} \frac{1-\beta }{2} t_1. \end{aligned}$$
(56)

The convergence time can then be obtained as follows:

$$\begin{aligned} {\mathcal {T}}-t_1 \le \left( \frac{\psi }{\xi }\right) ^{\frac{1+\alpha }{2}}\frac{2}{(1-\alpha )}, \end{aligned}$$
(57)

that is,

$$\begin{aligned} {\mathcal {T}}\le \left( \frac{\psi }{\xi }\right) ^{\frac{1+\alpha }{2}}\frac{2}{(1-\alpha )}+t_1. \end{aligned}$$
(58)

Meanwhile,

$$\begin{aligned} -\left( \frac{\xi }{\psi }\right) ^{\frac{1+\beta }{2}}\frac{(1-\beta )}{2} t_1=1-V_2(\tilde{\textit{u}}(0))^{\frac{(1-\beta )}{2}}\le 1, \end{aligned}$$
(59)

then,

$$\begin{aligned} t_1\le \frac{2}{(\beta -1)}\left( \frac{\psi }{\xi }\right) ^{\frac{1+\beta }{2}}. \end{aligned}$$
(60)

Substituting (60) into (58) yields the convergence time,

$$\begin{aligned} {\mathcal {T}}\le \left( \frac{\psi }{\xi }\right) ^{\frac{1+\alpha }{2}}\frac{2}{(1-\alpha )}+\frac{2}{(\beta -1)}\left( \frac{\psi }{\xi }\right) ^{\frac{1+\beta }{2}}. \end{aligned}$$
(61)
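The bound (61) can be illustrated by integrating the scalar comparison system \(\dot{V} = -(\xi /\psi )^{(1+\alpha )/2} V^{(1+\alpha )/2} - (\xi /\psi )^{(1+\beta )/2} V^{(1+\beta )/2}\) numerically. The forward-Euler sketch below uses illustrative parameter values (not taken from the paper) and a deliberately large initial value, and checks that \(V\) reaches numerical zero before the fixed-time bound:

```python
# Illustrative parameters: 0 < alpha < 1 < beta; xi/psi = 1 keeps arithmetic simple.
alpha, beta = 0.5, 1.5
xi, psi = 1.0, 1.0
a = (xi / psi) ** ((1 + alpha) / 2)
b = (xi / psi) ** ((1 + beta) / 2)

# Fixed-time bound from (61); equals 8 for these parameter values.
t_bound = (2 / (1 - alpha)) * (psi / xi) ** ((1 + alpha) / 2) \
        + (2 / (beta - 1)) * (psi / xi) ** ((1 + beta) / 2)

# Forward-Euler integration of the comparison ODE, clamped at zero.
dt, v, t = 1e-4, 100.0, 0.0   # large V(0): the bound must still hold
while t < t_bound:
    v = max(v - dt * (a * v ** ((1 + alpha) / 2) + b * v ** ((1 + beta) / 2)), 0.0)
    t += dt

assert v < 1e-6   # V has converged well before the fixed-time bound
```

The \(\beta\)-term dominates while \(V\) is large (dragging any initial value down to \(V=1\) in bounded time), and the \(\alpha\)-term dominates near zero (forcing exact convergence in finite time); their sum is what makes the settling time independent of \(V(0)\).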

Cite this article

Xu, J., Li, C., He, X. et al. Projection neural networks with finite-time and fixed-time convergence for sparse signal reconstruction. Neural Comput & Applic 36, 425–443 (2024). https://doi.org/10.1007/s00521-023-09015-9
