An inertial projection neural network for solving inverse variational inequalities
Introduction
Variational inequalities are an effective mathematical model in many different fields such as signal and image processing, physics, nonlinear optimization, economics, finance, transportation, elasticity, and automatic control, and have enjoyed rapid growth in theories, algorithms and applications [1], [2], [3], [4], [5].
A typical variational inequality is defined as follows: find u* ∈ H such that

⟨Φ(u*), u − u*⟩ ≥ 0, ∀u ∈ H, (1)

where Φ: ℝⁿ → ℝⁿ is a continuous mapping, ⟨ · , · ⟩ denotes the inner product of ℝⁿ and H is a nonempty closed convex set in ℝⁿ [6], [7], [8]. In this case, the variational inequality is denoted as VI(H, Φ). If the inverse mapping Ψ = Φ⁻¹ exists, then the above variational inequality VI(H, Φ) can be transformed into the following inverse variational inequality, denoted as IVI(H, Ψ) [9], [10]: find u* ∈ ℝⁿ such that Ψ(u*) ∈ H and

⟨u*, v − Ψ(u*)⟩ ≥ 0, ∀v ∈ H. (2)

Both variational inequalities and inverse variational inequalities have been studied extensively, and many methods and numerical algorithms have been developed for solving them [11], [12], [13], [14], [15], [16]. One of them is the so-called projection operator method, with the projection operator defined as follows:

P_H(u) = arg min{‖u − v‖ : v ∈ H}. (3)

The key idea of the projection operator method is to establish the equivalence between VI(H, Φ) and a fixed point problem. In fact, the formulation (3) plays a significant role in studying many problems such as nonsmooth optimization [17], distributed algorithms [18], [19], solving VI(H, Φ) [20] and sparse signal reconstruction [21].
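As a hedged illustration of the projection operator method described above, the following sketch takes H to be the box [0, 1]ⁿ and a toy affine mapping Φ(u) = u − c; both choices are assumptions of this example, not part of the paper. It checks numerically that the solution of VI(H, Φ) is exactly the point where the fixed-point residual u − P_H(u − ρΦ(u)) vanishes.

```python
import numpy as np

# Hedged illustration: H is assumed to be the box [0, 1]^n, and u* solves
# VI(H, Phi) iff it satisfies the fixed-point equation
#   u* = P_H(u* - rho * Phi(u*))  for any rho > 0.
def project_box(u, lo=0.0, hi=1.0):
    """Euclidean projection P_H onto the box H = [lo, hi]^n."""
    return np.clip(u, lo, hi)

def natural_residual(u, Phi, rho=0.5):
    """Norm of u - P_H(u - rho * Phi(u)); zero exactly at solutions of VI(H, Phi)."""
    return np.linalg.norm(u - project_box(u - rho * Phi(u)))

# Toy strongly monotone mapping Phi(u) = u - c; here the VI solution is P_H(c).
c = np.array([0.3, 1.7])
Phi = lambda u: u - c
u_star = project_box(c)
assert natural_residual(u_star, Phi) < 1e-12
assert natural_residual(np.zeros(2), Phi) > 0.1   # a non-solution has nonzero residual
```

The fixed-point characterization is what allows the VI to be recast as the equilibrium condition of a projection dynamics.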
On the other hand, in many science and engineering applications, real-time solutions of VI(H, Φ) are often necessary. Due to the various constraints and complexities of practical problems, a feasible approach in such scenarios is to apply artificial neural networks [22], [23], [24], [25]. Recently, projected neural networks (PNNs) have been proposed to solve VI(H, Φ) and nonlinear programming problems. For solving VI(H, Φ), Liang and Si [26], Xia and Wang [27], Xia and Feng [28] utilized the following PNN:

dx(t)/dt = −x(t) + P_H(x(t) − ρΦ(x(t))), (4)

where ρ > 0 is a design parameter and P_H is the projection operator defined in (3). They proved the existence and uniqueness of solutions for VI(H, Φ) and obtained global exponential stability of the equilibrium point of PNN (4) under the condition that Φ is Lipschitz continuous. Hu and Wang [29] studied the convergence of the PNN (4) presented in [26], [27], [28] for solving pseudomonotone VI(H, Φ). Liu et al. [30] designed continuous- and discrete-time one-layer PNNs of the form (4) for a class of constrained variational inequalities. Eshaghnezhad et al. [31] proved the Lyapunov stability and global convergence of the proposed PNN when the mapping Φ is strongly pseudomonotone. Gao and Liao [32] presented a novel PNN for solving general constrained variational inequalities. Ha et al. [33] investigated the global exponential stability of equilibrium solutions of PNN (4) for solving VI(H, Φ). Building on [33], Vuong [34] obtained the global exponential stability of the PNN (4) for solving VI(H, Φ) by using strong pseudomonotonicity and Lipschitz continuity of Φ. Zou et al. [35] solved IVI(H, Ψ) based on the following PNN with a simple one-layer structure:

dx(t)/dt = P_H(Ψ(x(t)) − βx(t)) − Ψ(x(t)), (5)

where β > 0 is a design parameter and P_H is the projection operator defined in (3). They showed that the PNN (5) converges globally to the equilibrium solution of the IVI(H, Ψ) if Ψ is strongly monotone.
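A forward-Euler discretization makes the behavior of such first-order projection dynamics easy to observe. The sketch below simulates dx/dt = −x + P_H(x − ρΦ(x)) for the same toy box constraint and affine mapping as above; the step size h and the specific data are assumptions of this illustration, not taken from the cited works.

```python
import numpy as np

# Illustrative forward-Euler simulation of the first-order PNN dynamics
#   dx/dt = -x + P_H(x - rho * Phi(x)),
# with H assumed to be the box [0, 1]^n and Phi a toy strongly monotone map.
def project_box(u, lo=0.0, hi=1.0):
    return np.clip(u, lo, hi)

def simulate_pnn(Phi, x0, rho=0.5, h=0.05, steps=2000):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + h * (-x + project_box(x - rho * Phi(x)))
    return x

c = np.array([0.3, 1.7])
Phi = lambda u: u - c                # toy mapping; VI solution is P_H(c) = [0.3, 1.0]
x = simulate_pnn(Phi, x0=[5.0, -5.0])
assert np.allclose(x, [0.3, 1.0], atol=1e-3)
```

The trajectory converges to the projection-based fixed point from an initial state far outside H, consistent with the exponential stability results cited for strongly monotone mappings.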
More recently, in order to overcome some drawbacks of PNNs, He et al. [36] proposed the following IPNN for solving VI(H, Φ):

d²x(t)/dt² + dx(t)/dt = P_H(x(t) − βΦ(x(t))) − x(t), (6)

where β > 0 is a design parameter and P_H is the projection operator defined in (3). They showed that the IPNN (6) converges to the equilibrium solution of the VI(H, Φ). In addition, IPNNs have been utilized for solving a nonconvex minimization problem [37], nonnegative matrix factorization [38], [39], and a general sparse signal recovery minimization problem [40]. However, to the best of our knowledge, IPNNs have not been exploited for solving IVI(H, Ψ), which is the primary motivation for this work. The main contributions of this work can be summarized as follows.
(1) This is the first work to reveal that IPNNs can be utilized for solving IVI(H, Ψ), and a new IPNN is proposed for this purpose.
(2) Traditional algorithms for solving IVI(H, Ψ) and related optimization problems, such as those in [9], [10], [11], [12], [13], [14], [15], [16], can easily become trapped in locally optimal solutions and are critically dependent on initial conditions, whereas the new IPNN proposed in this work avoids these problems.
(3) Under the assumption that the function Ψ is Lipschitz continuous, the newly proposed IPNN is proved to converge to the equilibrium solution.
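To make the inertial projection dynamics discussed above concrete, the following is a hedged simulation sketch of a second-order (inertial) projection system, ẍ + αẋ = P_H(x − βΦ(x)) − x. The damping coefficient α, the box set H, the toy mapping Φ, and this exact second-order form are assumptions of the illustration, not the authors' model (6) or (8) verbatim.

```python
import numpy as np

# Hedged sketch of an inertial (second-order) projection dynamic:
#   x'' + alpha * x' = P_H(x - beta * Phi(x)) - x.
# The damping alpha, the box H = [0, 1]^n, and the toy mapping Phi are
# assumptions of this illustration.
def project_box(u, lo=0.0, hi=1.0):
    return np.clip(u, lo, hi)

def simulate_ipnn(Phi, x0, alpha=2.0, beta=0.5, h=0.02, steps=10000):
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)                 # velocity x'(t), started at rest
    for _ in range(steps):
        a = project_box(x - beta * Phi(x)) - x - alpha * v
        x, v = x + h * v, v + h * a      # semi-explicit Euler step
    return x

c = np.array([0.3, 1.7])
Phi = lambda u: u - c                    # VI solution is P_H(c) = [0.3, 1.0]
x = simulate_ipnn(Phi, x0=[5.0, -5.0])
assert np.allclose(x, [0.3, 1.0], atol=1e-2)
```

The equilibria of this second-order system (where both the velocity and the acceleration vanish) coincide with the fixed points x = P_H(x − βΦ(x)), i.e., with the solutions of the underlying variational inequality; the inertial term only changes the transient behavior.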
The rest of the paper is organized as follows. In Section 2, we present some basic definitions and concepts. In Section 3, we study the existence and uniqueness of solutions to the proposed IPNN under the condition that Ψ is Lipschitz continuous, as well as the stability of the IPNN. In Section 4, simulations on numerical examples show the effectiveness and performance of the IPNN (8), followed by some conclusions in Section 5.
Section snippets
Preliminaries
Throughout this paper, unless otherwise specified, let ℝⁿ be the n-dimensional real vector space, the norm on ℝⁿ be denoted by ‖ · ‖, the inner product on ℝⁿ be denoted by ⟨ · , · ⟩, and H be a nonempty closed convex set in ℝⁿ. Assume Ψ: ℝⁿ → ℝⁿ is a continuous mapping. The solution set of IVI(H, Ψ) is denoted by Γ.
For the convenience of later discussion, some basic definitions and lemmas are given as follows. Definition 1 [31] Let F: ℝⁿ → ℝⁿ be a continuous map. The mapping F is said to be co-coercive on ℝⁿ if there
Convergence analysis of IPNN (8) for IVI(H, Ψ)
In this section, the convergence and optimality of the proposed IPNN are proven. First, the existence and uniqueness of solutions for system (8) are summarized as follows. Theorem 1 Let Ψ be Lipschitz continuous with a constant L > 0. Then for each initial condition there exists a unique continuous solution x(t) of (8). Proof Let x(t) and y(t) be two solutions of (8) with the same initial values.
Numerical examples
In this section, four numerical examples are presented to show the effectiveness of the IPNN (8) in solving IVI(H, Ψ). Example 1 Consider a case of IVI(H, Ψ) with where . This IVI(H, Ψ) has a unique solution . By simple calculations, one can verify that Ψ is Lipschitz continuous with constant and is co-coercive with constant . Let then η < β². According to Theorem 2, the IPNN (8) converges to . Transient responses of IPNN (8) with 10 random
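Since the concrete data of Example 1 are not reproduced in this snippet, the following sketch shows how the Lipschitz and co-coercivity constants of an affine mapping Ψ(u) = Mu + q might be computed and spot-checked numerically. The matrix M and vector q below are hypothetical stand-ins, not the example's actual data.

```python
import numpy as np

# Hypothetical stand-in for Example 1: for an affine mapping Psi(u) = M u + q
# with M symmetric positive definite, the Lipschitz constant is the spectral
# norm of M, and eta = lambda_min(M) / L^2 is a valid co-coercivity modulus.
rng = np.random.default_rng(0)
M = np.array([[2.0, 0.5], [0.5, 1.0]])   # illustrative SPD matrix
q = np.array([1.0, -1.0])                # illustrative shift
Psi = lambda u: M @ u + q

L = np.linalg.norm(M, 2)                  # Lipschitz constant of Psi
eta = min(np.linalg.eigvalsh(M)) / L**2   # co-coercivity modulus

# Spot-check <Psi(u) - Psi(v), u - v> >= eta * ||Psi(u) - Psi(v)||^2 on random pairs.
for _ in range(100):
    u, v = rng.standard_normal(2), rng.standard_normal(2)
    lhs = (Psi(u) - Psi(v)) @ (u - v)
    assert lhs >= eta * np.linalg.norm(Psi(u) - Psi(v))**2 - 1e-9
```

Such a check mirrors the "simple calculations" mentioned in Example 1: once L and the co-coercivity modulus are in hand, the parameter condition of Theorem 2 (e.g., a bound relating η and β²) can be verified before running the network.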
Conclusions
This paper has discussed an IPNN for solving IVIs and related optimization problems. The proposed inertial projection neural network has a unique solution under the condition that the corresponding operator is Lipschitz continuous. Moreover, the solution trajectories of the IPNN converge to the equilibrium solution asymptotically. Numerical examples have shown that the proposed IPNN is efficient in solving IVIs.
CRediT authorship contribution statement
Xingxing Ju: Conceptualization, Methodology, Investigation, Software, Writing - original draft. Chuandong Li: Funding acquisition, Writing - review & editing. Xing He: Conceptualization, Methodology. Gang Feng: Writing - review & editing.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China under Grant 61873213, Grant 61633011, Grant 61773320, in part by the Fundamental Research Funds for the Central Universities under Grant XDJK2020TY003, in part by National Key Research and Development Project under Grant 2018AAA0100101, and in part by the Chongqing Research Program of Basic Research and Frontier Technology under Grant cstc2015jcyjBX0052.
Xingxing Ju received the M.S. degree in School of Mathematics and Statistics, Southwest University, Chongqing, China, in 2019. He is currently pursuing the Ph.D. degree with the College of Electronic and Information Engineering, Southwest University, Chongqing, China. His current research interests include multiagent systems and control, neural networks, and distributed optimization.
References (41)
- et al., Finite-Dimensional Variational Inequalities and Complementarity Problems (2003)
- et al., Solving policy design problems: alternating direction method of multipliers-based methods for structured inverse variational inequalities, Eur. J. Oper. Res. (2020)
- et al., Inverse variational inequalities with projection-based solution methods, Eur. J. Oper. Res. (2011)
- et al., A proximal minimization algorithm for structured nonconvex and nonsmooth problems, SIAM J. Optim. (2019)
- et al., A bi-projection neural network for solving constrained quadratic optimization problems, IEEE Trans. Neural Netw. Learn. Syst. (2015)
- et al., One-layer continuous- and discrete-time projection neural networks for solving variational inequalities and related optimization problems, IEEE Trans. Neural Netw. Learn. Syst. (2013)
- et al., Graph sparse nonnegative matrix factorization algorithm based on the inertial projection neural network, Complexity (2018)
- et al., Smoothing inertial projection neural network for minimization L in sparse signal reconstruction, Neural Netw. (2018)
- et al., Design of cognitive radio systems under temperature-interference constraints: a variational inequality approach, IEEE Trans. Signal Process. (2010)
- et al., Two projection neural networks with reduced model complexity for nonlinear programming, IEEE Trans. Neural Netw. Learn. Syst. (2019)
- A recurrent neural network based on projection operator for extended general variational inequalities, IEEE Trans. Syst. Man Cybern. Part B
- l1-minimization algorithms for sparse signal reconstruction based on a projection neural network, IEEE Trans. Neural Netw. Learn. Syst.
- Quasi-variational inequality formulation of the multiclass dynamic traffic assignment problem, Transp. Res. B: Methodol.
- A new result for projection neural networks to solve linear variational inequalities and related optimization problems, Neural Comput. Appl.
- A qualitative mathematical analysis of a class of linear variational inequalities via semi-complementarity problems: applications in electronics, Math. Program.
- Solving a class of constrained black-box inverse variational inequalities, Eur. J. Oper. Res.
- Self-adaptive gradient projection algorithms for variational inequalities involving non-Lipschitz continuous operators, Numer. Alg.
- Golden ratio algorithms for variational inequalities, Math. Program.
- Modified projection-type methods for monotone variational inequalities, SIAM J. Control Optim.
- Reflected projected gradient method for solving monotone variational inequalities, SIAM J. Optim.
Cited by (9)
- A modified projection neural network with fixed-time convergence, Neurocomputing (2022)
- Novel projection neurodynamic approaches for constrained convex optimization, Neural Networks (2022)
- Exponential convergence of a proximal projection neural network for mixed variational inequalities and applications, Neurocomputing (2021)
- A proximal neurodynamic model for solving inverse mixed variational inequalities, Neural Networks (2021)
- A Fixed-Time Noise-Tolerance Neurodynamic Approach for Inverse Variational Inequalities, IEEE Transactions on Circuits and Systems II: Express Briefs (2023)
- An Inertial Inverse-Free Dynamical System for Solving Absolute Value Equations, Journal of Industrial and Management Optimization (2023)
Chuandong Li received the B.S. degree in applied mathematics from Sichuan University, Chengdu, China, in 1992, and the M.S. degree in operational research and control theory and the Ph.D. degree in computer software and theory from Chongqing University, Chongqing, China, in 2001 and 2005, respectively. From 2006 to 2008, he was a Research Fellow with the Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, Hong Kong. He has been a Professor with the College of Electronic and Information Engineering, Southwest University, Chongqing, since 2012. He has published over 200 journal papers. His current research interests include computational intelligence, neural networks, memristive systems, chaos control and synchronization, and impulsive dynamical systems.
Xing He received the B.S. degree in mathematics and applied mathematics from the Department of Mathematics, Guizhou University, Guiyang, China, in 2009 and the Ph.D. degree in computer science and technology from Chongqing University, Chongqing, China, in 2013. He is currently a Professor with the School of Electronics and Information Engineering, Southwest University, Chongqing. From 2012 to 2013, he was a Research Assistant with the Texas A&M University at Qatar, Doha, Qatar. From 2015 to 2016, he was a Senior Research Associate with the City University of Hong Kong, Hong Kong. His current research interests include neural networks, bifurcation theory, optimization method, smart grids, and nonlinear dynamical systems.
Gang Feng received the Ph.D. degree in Electrical Engineering from the University of Melbourne, Australia. He has been with City University of Hong Kong since 2000 after serving as lecturer/senior lecturer at School of Electrical Engineering, University of New South Wales, Australia, 1992–1999. He is now Chair Professor of Mechatronic Engineering. He has been awarded an Alexander von Humboldt Fellowship, the IEEE Transactions on Fuzzy Systems Outstanding Paper Award, Changjiang chair professorship from Education Ministry of China, and CityU Outstanding Research Award. He is listed as a SCI highly cited researcher by Clarivate Analytics. His current research interests include multi-agent systems and control, intelligent systems and control, and networked systems and control. Prof. Feng is an IEEE Fellow, an associate editor of IEEE Trans. Fuzzy Systems and Journal of Systems Science and Complexity, and was an associate editor of IEEE Trans. Automatic Control, IEEE Trans. Systems, Man & Cybernetics, Part C, Mechatronics, and Journal of Control Theory and Applications. He is on the Advisory Board of Unmanned Systems.