A Jacobian smoothing method for box constrained variational inequality problems☆
Introduction
Let F:Rn→Rn be a continuously differentiable mapping and let X be a nonempty closed convex set in Rn. The variational inequality problem, denoted by VIP(X,F), is to find a vector x*∈X such that F(x*)T(x−x*)⩾0 for all x∈X. A box constrained variational inequality problem, denoted by VIP(l,u,F), has X=[l,u]={x∈Rn: li⩽xi⩽ui, i=1,…,n}, where li∈R∪{−∞}, ui∈R∪{+∞} and ui>li, i=1,…,n. Further, if X=R+n, VIP(X,F) reduces to the nonlinear complementarity problem, denoted NCP(F), which is to find x∈Rn such that x⩾0, F(x)⩾0, xTF(x)=0. Two comprehensive surveys of variational inequality problems and nonlinear complementarity problems are [1] and [3].
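For concreteness, the NCP conditions x⩾0, F(x)⩾0, xTF(x)=0 are easy to verify numerically. The sketch below uses a hypothetical mapping F(x)=x−1 purely for illustration.

```python
import numpy as np

def is_ncp_solution(F, x, tol=1e-8):
    """Check the NCP conditions x >= 0, F(x) >= 0, x^T F(x) = 0 up to tol."""
    Fx = F(x)
    return bool(np.all(x >= -tol) and np.all(Fx >= -tol) and abs(x @ Fx) <= tol)

# Toy example (hypothetical): F(x) = x - 1 componentwise.
# x* = (1, 1) satisfies x* >= 0, F(x*) = 0, hence x*^T F(x*) = 0.
F = lambda x: x - 1.0
print(is_ncp_solution(F, np.array([1.0, 1.0])))  # True
print(is_ncp_solution(F, np.array([0.0, 0.0])))  # False: F(x) = (-1, -1) < 0
```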
A basic idea of many algorithms for the solution of VIP(l,u,F) is to reformulate this problem as a nonlinear system of equations H(z)=0, (1.1) where H is a nonsmooth mapping. Because of the nonsmoothness of the operator H, the classical Newton method cannot in general be used to solve problem (1.1).
It is not difficult to see that VIP(l,u,F) is equivalent to its KKT system: x solves VIP(l,u,F) if and only if, for each i=1,…,n, xi=li implies Fi(x)⩾0, li<xi<ui implies Fi(x)=0, and xi=ui implies Fi(x)⩽0. Further, the above relations are equivalent to the following system (1.2)
Recently, much effort has been devoted to constructing smoothing approximation functions for solving VIP(l,u,F) or NCP(F). This class of algorithms, called Jacobian smoothing methods, is due to Chen et al. [12]. At each iteration step, these methods try to solve the generalized Newton equation (1.3). However, the algorithm and convergence theory developed in [12] still rely on the linear equations (1.3) being solvable at each iteration step, an assumption intimately related to F being a P0-function. Consequently, this Jacobian smoothing method is not well defined for general box constrained variational inequality problems. In this paper, we concentrate on one particular reformulation of VIP(l,u,F) and propose a new Jacobian smoothing method that is well defined for general box constrained variational inequality problems. Using the Fischer–Burmeister function h:R2→R defined by (see [6]) h(a,b)=√(a²+b²)−a−b, it is easy to see from (1.2) that VIP(l,u,F) is equivalent to a nonlinear system H(z)=0, where z=(xT,yT)T∈R2n and the mapping H:R2n→R2n is built componentwise from h.
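The key property behind this reformulation, h(a,b)=0 if and only if a⩾0, b⩾0 and ab=0, is easy to check numerically. The following short sketch assumes the standard form h(a,b)=√(a²+b²)−a−b.

```python
import numpy as np

def fb(a, b):
    """Fischer-Burmeister function: h(a, b) = sqrt(a^2 + b^2) - a - b."""
    return np.sqrt(a**2 + b**2) - a - b

# h(a, b) = 0 exactly when a >= 0, b >= 0 and a*b = 0:
print(fb(0.0, 3.0))   # 0.0 (complementary pair)
print(fb(2.0, 0.0))   # 0.0 (complementary pair)
print(fb(1.0, 1.0))   # sqrt(2) - 2 < 0 (both components positive)
print(fb(-1.0, 2.0))  # sqrt(5) - 1 > 0 (a negative component)
```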
The globalization strategy for our algorithm is based mainly on the natural merit function θ:R2n→R+ given by θ(z)=½∥H(z)∥², where z=(xT,yT)T∈R2n. The corresponding smooth operator Hμ:R2n→R2n is defined similarly, where hμ:R2→R denotes the smooth approximation hμ(a,b)=√(a²+b²+2μ²)−a−b of the Fischer–Burmeister function. It is easy to see that, for every μ>0, |hμ(a,b)−h(a,b)|⩽√2·μ, and hμ(a,b) is continuously differentiable with gradient determined by ∇hμ(a,b)=(a/√(a²+b²+2μ²)−1, b/√(a²+b²+2μ²)−1)T. Similarly, the merit function θμ:R2n→R+ is given by θμ(z)=½∥Hμ(z)∥², where z=(xT,yT)T∈R2n.
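A minimal numerical sketch of the smoothing, assuming the common perturbation hμ(a,b)=√(a²+b²+2μ²)−a−b (the paper's exact hμ may differ); it checks the uniform bound |hμ−h|⩽√2·μ and the gradient formula against a finite difference.

```python
import numpy as np

def fb(a, b):
    """Fischer-Burmeister function h(a, b) = sqrt(a^2 + b^2) - a - b."""
    return np.sqrt(a**2 + b**2) - a - b

def fb_mu(a, b, mu):
    """Assumed smoothed FB function: sqrt(a^2 + b^2 + 2*mu^2) - a - b."""
    return np.sqrt(a**2 + b**2 + 2.0 * mu**2) - a - b

def grad_fb_mu(a, b, mu):
    """Gradient of fb_mu; smooth everywhere when mu > 0."""
    r = np.sqrt(a**2 + b**2 + 2.0 * mu**2)
    return np.array([a / r - 1.0, b / r - 1.0])

# Uniform approximation: |fb_mu - fb| <= sqrt(2)*mu for all (a, b).
for mu in (1.0, 0.1, 0.01):
    assert abs(fb_mu(0.5, -0.3, mu) - fb(0.5, -0.3)) <= np.sqrt(2.0) * mu

# Gradient formula vs. a central finite difference in the first argument.
eps = 1e-6
fd = (fb_mu(0.5 + eps, -0.3, 0.1) - fb_mu(0.5 - eps, -0.3, 0.1)) / (2 * eps)
assert abs(fd - grad_fb_mu(0.5, -0.3, 0.1)[0]) < 1e-6
```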
Next we introduce some notation. Let G:Rn→Rm be continuously differentiable. Then G′(x)∈Rm×n denotes the Jacobian of G at a point x∈Rn, whereas the symbol ∇G(x) is used for the transposed Jacobian. In particular, if m=1, the gradient ∇G(x) is viewed as a column vector. If G:Rn→Rm is only locally Lipschitzian, we can define Clarke's [10] generalized Jacobian as ∂G(x)=conv{lim G′(xk): xk→x, xk∈DG}; here DG denotes the set of points at which G is differentiable and conv S is the convex hull of a set S. If m=1, we call ∂G(x) the generalized gradient of G at x for obvious reasons.
Usually, ∂G(x) is not easy to compute, especially for m>1. For this reason, we use in this paper a kind of generalized Jacobian of the function G, denoted by ∂CG and defined by (see [11]) ∂CG(x)=∂G1(x)T×∂G2(x)T×⋯×∂Gm(x)T, where Gi(x) is the ith component function of G.
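For m=1 the generalized gradient can be visualized with the absolute value function: its derivative at differentiable points near 0 is ±1, and Clarke's construction takes the convex hull [−1,1]. A tiny illustrative sketch:

```python
import numpy as np

# g(t) = |t| is differentiable except at 0, with g'(t) = sign(t).
# Limits of g'(t) over differentiable points t -> 0 give {-1, +1};
# the convex hull of these limits is Clarke's generalized gradient [-1, 1].
grads = {float(np.sign(t)) for t in (-1e-3, -1e-9, 1e-9, 1e-3)}
print(sorted(grads))  # [-1.0, 1.0]
```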
Furthermore, we denote by ∥x∥ the Euclidean norm of x∈Rn and by ∥A∥ the spectral norm of a matrix A∈Rn×n, i.e., the matrix norm induced by the Euclidean vector norm. Sometimes we also need the Frobenius norm ∥A∥F of a matrix A∈Rn×n. If A∈Rn×n is any given matrix and S is a nonempty set of matrices, we denote by dist(A,S)=inf{∥A−B∥: B∈S} the distance between A and S. Corresponding to the spectral norm and the Frobenius norm, this distance is sometimes also written as dist2(A,S) and distF(A,S), respectively.
The remainder of the paper is organized as follows. In the next section, the mathematical background and some preliminary results are summarized. In Section 3, the Jacobian consistency property and the Jacobian smoothing idea are discussed. The algorithm is described in detail in Section 4. Sections 5 and 6 are devoted to proving the global and local superlinear convergence of the algorithm, respectively. Numerical results are reported in Section 7.
Section snippets
Preliminaries
In this section, we summarize some properties of the functions H, Hμ and θ. In addition, we prove some preliminary results which will be used later.
Firstly, we can obtain the following result from the definition of the C-subdifferential and Proposition 3.1 in [2]. Proposition 2.1 For an arbitrary z=(xT,yT)T∈R2n, we have …, where Hj(z), j=1,2,…,2n, denotes the jth component function of H.
Jacobian consistency property
First we introduce the definition of the Jacobian consistency property (see [12]). Definition 3.1 Let G(·) be a Lipschitz function on Rn and let Gμ(·) be a corresponding smoothing approximation of G(·). If, for any x∈Rn, limμ↓0 dist(∇Gμ(x), ∂CG(x))=0 holds, then we say that Gμ(·) satisfies the Jacobian consistency property. Lemma 3.2 Let z=(xT,yT)T∈R2n be arbitrary but fixed. Then the function Hμ defined in (1.5) satisfies the Jacobian consistency property, i.e., limμ↓0 dist(∇Hμ(z), ∂CH(z))=0. Proof From the definition of Hμ, we have for all μ>0,
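The Jacobian consistency property can be observed numerically at a differentiable point of the Fischer–Burmeister function, again assuming the smoothing hμ(a,b)=√(a²+b²+2μ²)−a−b; at such a point the distance to the generalized gradient is simply the distance to ∇h.

```python
import numpy as np

def grad_fb(a, b):
    """Gradient of the FB function at a differentiable point (a, b) != (0, 0)."""
    r = np.hypot(a, b)
    return np.array([a / r - 1.0, b / r - 1.0])

def grad_fb_mu(a, b, mu):
    """Gradient of the assumed smoothing sqrt(a^2 + b^2 + 2*mu^2) - a - b."""
    r = np.sqrt(a**2 + b**2 + 2.0 * mu**2)
    return np.array([a / r - 1.0, b / r - 1.0])

# dist(grad h_mu, grad h) -> 0 as mu -> 0 (consistency at the point (0.7, -0.2)).
a, b = 0.7, -0.2
for mu in (1e-1, 1e-3, 1e-6):
    print(mu, np.linalg.norm(grad_fb_mu(a, b, mu) - grad_fb(a, b)))
```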
Algorithm
In this section, we give a detailed description of our Jacobian smoothing method and state some of its elementary properties. In particular, we show that the algorithm is well defined for an arbitrary box constrained variational inequality problem. Algorithm 4.1 Jacobian Smoothing Method Choose the parameters and a tolerance 0⩽ϵ≪1; set k:=0. If ∥∇θ(zk)∥⩽ϵ, stop. Otherwise, seek a solution dk∈R2n of the following linear system (4.1). If (4.1) is not solvable or if the
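To convey the flavor of such a method, here is a much-simplified smoothing Newton sketch for the NCP case only: it solves H(x)=0 with Hi(x)=h(xi,Fi(x)), uses the Jacobian of the smoothed system in the Newton equation, and globalizes with an Armijo search on θ(x)=½∥H(x)∥². It is not Algorithm 4.1 (which additionally manages βk, μk and fallback steps); the smoothing form, parameter choices and test mapping below are all illustrative assumptions.

```python
import numpy as np

def theta(F, x):
    """Natural merit function 0.5 * ||H(x)||^2 with H_i(x) = fb(x_i, F_i(x))."""
    Fx = F(x)
    H = np.sqrt(x**2 + Fx**2) - x - Fx
    return 0.5 * H @ H

def smoothing_newton_ncp(F, JF, x0, tol=1e-10, max_it=50):
    """Simplified Jacobian smoothing Newton method for NCP(F)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_it):
        Fx = F(x)
        H = np.sqrt(x**2 + Fx**2) - x - Fx      # nonsmooth FB residual
        if np.linalg.norm(H) <= tol:
            break
        mu = 0.5 * np.linalg.norm(H)            # tie the smoothing to the residual
        r = np.sqrt(x**2 + Fx**2 + 2.0 * mu**2)
        J = np.diag(x / r - 1.0) + np.diag(Fx / r - 1.0) @ JF(x)
        d = np.linalg.solve(J, -H)              # Newton equation, smoothed Jacobian
        t, th = 1.0, 0.5 * H @ H                # Armijo backtracking on theta
        while theta(F, x + t * d) > (1.0 - 1e-4 * t) * th and t > 1e-12:
            t *= 0.5
        x = x + t * d
    return x

# Toy NCP (hypothetical): F(x) = x - 1, whose unique solution is x* = (1, 1).
F = lambda x: x - 1.0
JF = lambda x: np.eye(x.size)
x = smoothing_newton_ncp(F, JF, np.array([5.0, -3.0]))
print(x)  # approx (1, 1)
```

Note that for this smoothing the diagonal entries of J are bounded away from zero, so the sketch's Newton system is always solvable here; the paper's point is to guarantee well-definedness in the general box constrained setting.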
Global convergence
We begin our global convergence analysis with the following observation. Lemma 5.1 Let {zk=((xk)T,(yk)T)T}⊂R2n be a sequence generated by Algorithm 4.1. Assume that {zk} has an accumulation point z*=((x*)T,(y*)T)T∈R2n which is a solution of VIP(l,u,F). Then the index set K is infinite and {μk}→0. Proof Assume that K is finite. Then it follows from (4.6) and the updating rules for βk in step (S.4) of Algorithm 4.1 that there is a k0∈N such that … and … for all k∈N
Local convergence
In this section, we want to show that Algorithm 4.1 is locally Q-superlinearly/Q-quadratically convergent under certain conditions. First let us state the following result in [9]. Proposition 6.1 Assume that z* is an isolated accumulation point of a sequence {zk} such that {∥zk+1−zk∥}L→0 for any subsequence {zk}L converging to z*. Then the whole sequence {zk} converges to z*.
By Proposition 6.1, we can obtain the following result. Theorem 6.2 Let {zk} be a sequence generated by Algorithm 4.1. If one of the accumulation points
Numerical experiments
In this section we present some numerical experiments for the algorithm proposed in Section 4. Throughout the computational experiments, the parameters used in Algorithm 4.1 were α=0.4, σ=0.25, ρ=0.95, η=0.5, γ=0.6. The stopping criterion is ∥H(zk)∥⩽10−6. Example 1 We first consider the following nonlinear complementarity problem. The test functions are of the following form: … Taking the constraint set X=[l,u] with l
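Computed iterates in experiments of this kind can also be validated against the natural (projection) residual of VIP(l,u,F). The test problem below is hypothetical and only illustrates the check, with mid(l,u,·) realized as a componentwise clip.

```python
import numpy as np

def vip_residual(F, x, l, u):
    """Natural residual for VIP(l,u,F): x - mid(l, u, x - F(x)).
    It vanishes exactly at the solutions of VIP(l,u,F)."""
    return x - np.clip(x - F(x), l, u)

# Hypothetical test problem: F(x) = x - 2 on the box [0, 1]^2.
# Its solution is x* = (1, 1): F(x*) = (-1, -1) <= 0 and x* sits at the upper bound.
F = lambda x: x - 2.0
l, u = np.zeros(2), np.ones(2)
x_star = np.ones(2)
print(np.linalg.norm(vip_residual(F, x_star, l, u)))  # 0.0
```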
References (20)
- et al., Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications, Math. Program. (1990)
- et al., A new class of merit functions for nonlinear complementarity problems and related algorithms, SIAM J. Optim. (1997)
- Complementarity problems, in: Handbook (1995)
- Convergence analysis of some algorithms for solving nonsmooth equations, Math. Oper. Res. (1993)
- Solution of monotone complementarity problems with locally Lipschitzian functions, Math. Program. (1997)
- A special Newton-type optimization method, Optimization (1992)
- Some noninterior continuation methods for linear complementarity problems, SIAM J. Matrix Anal. (1996)
- A new approach to continuation methods for complementarity problems with uniform P-functions, Oper. Res. Lett. (1997)
- et al., Computing a trust region step, SIAM J. Sci. Stat. Comput. (1983)
- Optimization and Nonsmooth Analysis (1983)
☆ Supported by the National Natural Science Foundation of China (Grant 10361003) and Guangxi Science Foundation.