Elsevier

Neurocomputing

Volume 406, 17 September 2020, Pages 99-105

An inertial projection neural network for solving inverse variational inequalities

https://doi.org/10.1016/j.neucom.2020.04.023

Abstract

A novel inertial projection neural network (IPNN) is proposed for solving inverse variational inequalities (IVIs) in this paper. It is shown that the IPNN has a unique solution under the condition of Lipschitz continuity and that the solution trajectories of the IPNN converge to the equilibrium solution asymptotically if the corresponding operator is co-coercive. Finally, several examples are presented to illustrate the effectiveness of the proposed IPNN.

Introduction

Variational inequalities are an effective mathematical model in many different fields such as signal and image processing, physics, nonlinear optimization, economics, finance, transportation, elasticity, and automatic control, and have enjoyed rapid growth in theory, algorithms, and applications [1], [2], [3], [4], [5].

A typical variational inequality is defined as follows: find x* ∈ H such that

⟨Φ(x*), y − x*⟩ ≥ 0, ∀ y ∈ H,  (1)

where Φ: R^n → R^n is a continuous mapping, ⟨·, ·⟩ denotes the inner product on R^n, and H is a nonempty closed convex set in R^n [6], [7], [8]. In this case, the variational inequality is denoted as VI(H, Φ). If the inverse function x = Φ^{−1}(u) = Ψ(u) exists, then the above variational inequality VI(H, Φ) can be transformed into the following inverse variational inequality, denoted as IVI(H, Ψ) [9], [10]: find u* ∈ R^n such that Ψ(u*) ∈ H and

⟨u*, z − Ψ(u*)⟩ ≥ 0, ∀ z ∈ H.  (2)

Both variational inequalities and inverse variational inequalities have been studied extensively, and many methods and numerical algorithms have been developed for solving them [11], [12], [13], [14], [15], [16]. One of them is the so-called projection operator method, with the projection operator P_H: R^n → H defined as

P_H(z) = argmin_{y ∈ H} ‖y − z‖.  (3)

The key idea of the projection operator method is to establish the equivalence between VI(H, Φ) and a fixed point problem. In fact, the formulation (3) plays a significant role in studying many problems such as nonsmooth optimization [17], distributed algorithms [18], [19], solving VI(H, Φ) [20], and sparse signal reconstruction [21].
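For intuition, the fixed-point characterization underlying (3) — x* solves VI(H, Φ) if and only if x* = P_H(x* − ρΦ(x*)) for any ρ > 0 — can be checked numerically. The sketch below uses a hypothetical affine operator A, b and a box constraint set chosen purely for illustration; none of these values come from the paper.

```python
import numpy as np

def proj_box(z, lo, hi):
    """Projection P_H onto the box H = [lo, hi]^n (closest point of H to z)."""
    return np.clip(z, lo, hi)

# Hypothetical strongly monotone affine operator Phi(x) = A x + b.
A = np.array([[2.0, 0.5], [0.5, 2.0]])
b = np.array([-1.0, -2.0])
Phi = lambda x: A @ x + b

# Fixed-point iteration x <- P_H(x - rho * Phi(x)); its fixed points are
# exactly the solutions of VI(H, Phi).
x = np.zeros(2)
for _ in range(2000):
    x = proj_box(x - 0.1 * Phi(x), 0.0, 1.0)

# The limit satisfies <Phi(x), y - x> >= 0 for all y in H; since this is
# affine in y, checking the corners of the box suffices.
corners = [np.array([i, j]) for i in (0.0, 1.0) for j in (0.0, 1.0)]
assert all(Phi(x) @ (y - x) >= -1e-8 for y in corners)
```

Here the solution happens to lie in the interior of H, so Φ vanishes at the limit; with a tighter box the projection would become active and the inequality would hold with strict slack at some corners.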

On the other hand, in many science and engineering applications, real-time solutions of VI(H, Φ) are often necessary. Due to various constraints and complexities of practical problems, a feasible approach in such scenarios is to apply artificial neural networks [22], [23], [24], [25]. Recently, projected neural networks (PNNs) have been proposed to solve VI(H, Φ) and nonlinear programming problems. For solving VI(H, Φ), Liang and Si [26], Xia and Wang [27], and Xia and Feng [28] utilized the following PNN:

dx/dt = ρ{−x(t) + P_H(x(t) − Φ(x(t)))},  (4)

where ρ > 0 is a design parameter and P_H is the projection operator defined in (3). They proved the existence and uniqueness of solutions of VI(H, Φ) and obtained global exponential stability of the equilibrium point of the PNN (4) under the condition that Φ is Lipschitz. Hu and Wang [29] studied the convergence of the PNN (4) presented in [26], [27], [28] for solving pseudomonotone VI(H, Φ). Liu et al. [30] designed continuous- and discrete-time one-layer PNNs of the form (4) for a class of constrained variational inequalities. Eshaghnezhad et al. [31] proved the Lyapunov stability and global convergence of their proposed PNN when the mapping Φ is strongly pseudomonotone. Gao and Liao [32] presented a novel PNN for solving general constrained variational inequalities. Ha et al. [33] investigated the global exponential stability of equilibrium solutions of the PNN (4) for solving VI(H, Φ). Based on [33], Vuong [34] obtained the global exponential stability of the PNN (4) for solving VI(H, Φ) by using strong pseudomonotonicity and Lipschitz continuity of Φ. Zou et al. [35] solved IVI(H, Ψ) based on the following PNN with a simple one-layer structure:

du/dt = β{−Ψ(u(t)) + P_H(Ψ(u(t)) − u(t))},  (5)

where β > 0 is a design parameter and P_H is the projection operator defined in (3). They showed that the PNN (5) is globally convergent to the equilibrium solution of IVI(H, Ψ) if Ψ is strongly monotone.
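The flow (4) can be integrated with a simple forward-Euler scheme; a minimal sketch follows, using a hypothetical affine operator, box set, and illustrative values of ρ and the step size h, none of which are taken from the paper. At an equilibrium the state satisfies x = P_H(x − Φ(x)), i.e. it solves VI(H, Φ).

```python
import numpy as np

# Forward-Euler simulation of the PNN dynamics (4):
#   dx/dt = rho * (-x + P_H(x - Phi(x))).
A = np.array([[3.0, 1.0], [1.0, 3.0]])
b = np.array([-2.0, -4.0])
Phi = lambda x: A @ x + b
P_H = lambda z: np.clip(z, 0.0, 2.0)      # H = [0, 2]^2

rho, h = 1.0, 0.01
x = np.array([5.0, -5.0])                 # initial state well outside H
for _ in range(5000):
    x = x + h * rho * (-x + P_H(x - Phi(x)))

# Equilibrium check: x = P_H(x - Phi(x)) means x solves VI(H, Phi).
assert np.linalg.norm(x - P_H(x - Phi(x))) < 1e-6
```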

More recently, in order to overcome some drawbacks of PNNs, He et al. [36] proposed the following IPNN for solving VI(H, Φ):

dx/dt = y(t),
dy/dt = −βy(t) + P_H(x(t) − Φ(x(t))) − x(t),  (6)

where β > 0 is a design parameter and P_H is the projection operator defined in (3). They showed that the IPNN (6) converges to the equilibrium solution of VI(H, Φ). In addition, IPNNs have been utilized for solving a nonconvex ℓ1−2 minimization problem [37], nonnegative matrix factorization [38], [39], and a general sparse signal recovery minimization problem [40]. However, to the best of our knowledge, IPNNs have not been exploited for solving IVI(H, Ψ), which is the primary motivation for this work. The main contributions of this work can be summarized as follows.

(1) This is the first work to reveal that IPNNs can be utilized for solving IVI(H, Ψ), and a new IPNN is proposed for this purpose.

(2) Traditional algorithms for solving IVI(H, Ψ) and related optimization problems, such as those in [9], [10], [11], [12], [13], [14], [15], [16], can easily become trapped in local optima and are critically dependent on initial conditions; the new IPNN proposed in this work avoids these problems.

(3) Under the assumption that the function Ψ is Lipschitz continuous, the newly proposed IPNN is proved to converge to the equilibrium solution.

The rest of the paper is organized as follows. In Section 2, we present some basic definitions and concepts. In Section 3, we study the existence and uniqueness of solutions to the proposed IPNN under the condition that Ψ is Lipschitz continuous, and then its stability. In Section 4, simulations on numerical examples show the effectiveness and performance of the IPNN (8), which is followed by some conclusions in Section 5.
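The inertial dynamics (6) quoted above can likewise be simulated directly; compared with the first-order flow (4), the auxiliary variable y acts as a velocity whose damping is controlled by β. The sketch below uses the same kind of hypothetical affine data as before, chosen only to illustrate the mechanics of (6).

```python
import numpy as np

# Forward-Euler simulation of the IPNN (6):
#   dx/dt = y,   dy/dt = -beta*y + P_H(x - Phi(x)) - x.
# The affine operator, box set, beta, and step size are illustrative choices.
A = np.array([[3.0, 1.0], [1.0, 3.0]])
b = np.array([-2.0, -4.0])
Phi = lambda x: A @ x + b
P_H = lambda z: np.clip(z, 0.0, 2.0)

beta, h = 4.0, 0.005
x, y = np.array([5.0, -5.0]), np.zeros(2)
for _ in range(20000):
    x, y = x + h * y, y + h * (-beta * y + P_H(x - Phi(x)) - x)

# At an equilibrium y = 0 and x = P_H(x - Phi(x)), so x solves VI(H, Phi).
assert np.linalg.norm(x - P_H(x - Phi(x))) < 1e-6
assert np.linalg.norm(y) < 1e-6
```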


Preliminaries

Throughout this paper, unless otherwise specified, let R^n be the n-dimensional real vector space, with norm denoted by ‖·‖ and inner product denoted by ⟨·, ·⟩, and let H be a nonempty closed convex set in R^n. Assume Ψ: R^n → R^n is a continuous mapping. The solution set of IVI(H, Ψ) is denoted by Γ.

For the convenience of later discussion, some basic definitions and lemmas are given as follows.

Definition 1

[31] Let F: R^n → R^n be a continuous mapping. The mapping F is said to be co-coercive on R^n if there exists a constant η > 0 such that ⟨F(x) − F(y), x − y⟩ ≥ η‖F(x) − F(y)‖² for all x, y ∈ R^n.

Convergence analysis of IPNN (8) for IVI(H, Ψ)

In this section, the convergence and optimality of the proposed IPNN are proven. First, the existence and uniqueness of solutions of system (8) are summarized as follows.

Theorem 1

Let Ψ be Lipschitz continuous with constant L > 0. Then for each x_0 ∈ R^{2n}, there exists a unique continuous solution x(t) of (8) with x(0) = x_0 on t ∈ [0, +∞).

Proof

Let x_1 = (u_1^T, z_1^T)^T and x_2 = (u_2^T, z_2^T)^T ∈ R^{2n}. For i = 1, 2, let

T(x_i, t) = ( z_i(t),  −βz_i(t) + P_H(Ψ(u_i(t)) − u_i(t)) − Ψ(u_i(t)) )^T.

Then

‖T(x_1, t) − T(x_2, t)‖ …

Numerical examples

In this section, four numerical examples are presented to show the effectiveness of the IPNN (8) in solving IVI(H, Ψ).

Example 1

Consider a case of IVI(H, Ψ) with Ψ(u) = 2u + 1, where H = {u ∈ R : 0 ≤ u ≤ 1}. This IVI(H, Ψ) has a unique solution û = 0. By simple calculations, one can verify that Ψ is Lipschitz continuous with constant L = 3 and that F(u) = Ψ(u) − P_H(Ψ(u) − u) is co-coercive with constant η = 7. Let β = 4; then η < β². According to Theorem 2, the IPNN (8) converges to û = 0. Transient responses of the IPNN (8) with 10 random …
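Example 1 can be reproduced numerically. Since the closed form of the IPNN (8) is not shown in this excerpt, the sketch below assumes the two-layer form suggested by the mapping T(x, t) in the proof of Theorem 1, namely du/dt = z, dz/dt = −βz + P_H(Ψ(u) − u) − Ψ(u); the step size, horizon, and random initial states are illustrative choices, not values from the paper.

```python
import numpy as np

# Example 1: Psi(u) = 2u + 1, H = [0, 1], beta = 4; the unique solution of
# IVI(H, Psi) is u* = 0.  The state equations below are an assumption based
# on the mapping T(x, t) in the proof of Theorem 1, not a quote of (8).
Psi = lambda u: 2.0 * u + 1.0
P_H = lambda v: min(max(v, 0.0), 1.0)       # projection onto H = [0, 1]

beta, h = 4.0, 0.001
rng = np.random.default_rng(0)
finals = []
for u0 in rng.uniform(-5.0, 5.0, size=10):  # 10 random initial states
    u, z = float(u0), 0.0
    for _ in range(50000):                  # forward Euler up to t = 50
        u, z = u + h * z, z + h * (-beta * z + P_H(Psi(u) - u) - Psi(u))
    finals.append(u)

# Every trajectory settles at the unique solution u* = 0.
assert max(abs(u) for u in finals) < 1e-3
```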

Conclusions

This paper discussed an IPNN for solving IVIs and related optimization problems. The proposed inertial projection neural network has a unique solution under the condition that the corresponding operator is Lipschitz continuous. Moreover, the solution trajectories of the IPNN converge to the equilibrium solution asymptotically when the corresponding operator is co-coercive. Some numerical examples have shown that the proposed IPNN is efficient in solving IVIs.

CRediT authorship contribution statement

Xingxing Ju: Conceptualization, Methodology, Investigation, Software, Writing - original draft. Chuandong Li: Funding acquisition, Writing - review & editing. Xing He: Conceptualization, Methodology. Gang Feng: Writing - review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 61873213, Grant 61633011, Grant 61773320, in part by the Fundamental Research Funds for the Central Universities under Grant XDJK2020TY003, in part by National Key Research and Development Project under Grant 2018AAA0100101, and in part by the Chongqing Research Program of Basic Research and Frontier Technology under Grant cstc2015jcyjBX0052.

Xingxing Ju received the M.S. degree in School of Mathematics and Statistics, Southwest University, Chongqing, China, in 2019. He is currently pursuing the Ph.D. degree with the College of Electronic and Information Engineering, Southwest University, Chongqing, China. His current research interests include multiagent systems and control, neural networks, and distributed optimization.

References (41)

  • Q. Liu et al., A recurrent neural network based on projection operator for extended general variational inequalities, IEEE Trans. Syst. Man Cybern. Part B (2010)
  • Q. Liu et al., L1-minimization algorithms for sparse signal reconstruction based on a projection neural network, IEEE Trans. Neural Netw. Learn. Syst. (2016)
  • M. Bliemer et al., Quasi-variational inequality formulation of the multiclass dynamic traffic assignment problem, Transp. Res. B: Methodol. (2003)
  • B. Huang et al., A new result for projection neural networks to solve linear variational inequalities and related optimization problems, Neural Comput. Appl. (2013)
  • A. Khalid et al., A qualitative mathematical analysis of a class of linear variational inequalities via semi-complementarity problems: applications in electronics, Math. Program. (2011)
  • B. He et al., Solving a class of constrained black-box inverse variational inequalities, Eur. J. Oper. Res. (2010)
  • P. Anh et al., Self-adaptive gradient projection algorithms for variational inequalities involving non-Lipschitz continuous operators, Numer. Algorithms (2019)
  • Y. Malitsky, Golden ratio algorithms for variational inequalities, Math. Program. (2019)
  • M. Solodov et al., Modified projection-type methods for monotone variational inequalities, SIAM J. Control Optim. (1996)
  • Y. Malitsky, Reflected projected gradient method for solving monotone variational inequalities, SIAM J. Optim. (2015)


    Chuandong Li received the B.S. degree in applied mathematics from Sichuan University, Chengdu, China, in 1992, and the M.S. degree in operational research and control theory and the Ph.D. degree in computer software and theory from Chongqing University, Chongqing, China, in 2001 and 2005, respectively. From 2006 to 2008, he was a Research Fellow with the Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, Hong Kong. He has been a Professor with the College of Electronic and Information Engineering, Southwest University, Chongqing, since 2012. He has published over 200 journal papers. His current research interests include computational intelligence, neural networks, memristive systems, chaos control and synchronization, and impulsive dynamical systems.

    Xing He received the B.S. degree in mathematics and applied mathematics from the Department of Mathematics, Guizhou University, Guiyang, China, in 2009 and the Ph.D. degree in computer science and technology from Chongqing University, Chongqing, China, in 2013. He is currently a Professor with the School of Electronics and Information Engineering, Southwest University, Chongqing. From 2012 to 2013, he was a Research Assistant with the Texas A&M University at Qatar, Doha, Qatar. From 2015 to 2016, he was a Senior Research Associate with the City University of Hong Kong, Hong Kong. His current research interests include neural networks, bifurcation theory, optimization method, smart grids, and nonlinear dynamical systems.

    Gang Feng received the Ph.D. degree in Electrical Engineering from the University of Melbourne, Australia. He has been with City University of Hong Kong since 2000 after serving as lecturer/senior lecturer at School of Electrical Engineering, University of New South Wales, Australia, 1992–1999. He is now Chair Professor of Mechatronic Engineering. He has been awarded an Alexander von Humboldt Fellowship, the IEEE Transactions on Fuzzy Systems Outstanding Paper Award, Changjiang chair professorship from Education Ministry of China, and CityU Outstanding Research Award. He is listed as a SCI highly cited researcher by Clarivate Analytics. His current research interests include multi-agent systems and control, intelligent systems and control, and networked systems and control. Prof. Feng is an IEEE Fellow, an associate editor of IEEE Trans. Fuzzy Systems and Journal of Systems Science and Complexity, and was an associate editor of IEEE Trans. Automatic Control, IEEE Trans. Systems, Man & Cybernetics, Part C, Mechatronics, and Journal of Control Theory and Applications. He is on the Advisory Board of Unmanned Systems.
