A Jacobian smoothing method for box constrained variational inequality problems

https://doi.org/10.1016/j.amc.2004.03.018

Abstract

In this paper we propose a Jacobian smoothing algorithm for the solution of the box constrained variational inequality problem VIP(l,u,F). The algorithm is based on a reformulation of the problem as a semismooth system of equations using the Fischer–Burmeister function $h(a,b)=\sqrt{a^2+b^2}-a-b$. Global and local superlinear convergence results for VIP(l,u,F) are obtained. Numerical experiments confirm the good theoretical properties of the algorithm.

Introduction

Let $F:\mathbb{R}^n\to\mathbb{R}^n$ be a continuously differentiable mapping and let $X$ be a nonempty closed convex set in $\mathbb{R}^n$. The variational inequality problem, denoted VIP(X,F), is to find a vector $x^*\in X$ such that
$$F(x^*)^T(x-x^*)\geq 0\quad\text{for all }x\in X.$$
The box constrained variational inequality problem, denoted VIP(l,u,F), corresponds to
$$X=\{x\in\mathbb{R}^n\ |\ l\leq x\leq u\},$$
where $l_i\in\mathbb{R}\cup\{-\infty\}$, $u_i\in\mathbb{R}\cup\{+\infty\}$ and $u_i>l_i$, $i=1,\dots,n$. Further, if $X=\mathbb{R}^n_+$, VIP(X,F) reduces to the nonlinear complementarity problem, denoted NCP(F), which is to find $x\in\mathbb{R}^n$ such that
$$x\geq 0,\quad F(x)\geq 0,\quad x^T F(x)=0.$$
Two comprehensive surveys of variational inequality problems and nonlinear complementarity problems are [1] and [3].
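The equivalence between the box problem and its special cases can be checked numerically with the standard projection (natural) residual: $x$ solves VIP(l,u,F) exactly when $x$ equals the projection of $x-F(x)$ onto $[l,u]$. This is a well-known characterization, not the reformulation used later in the paper; the one-dimensional map below is a hypothetical toy example.

```python
import math

def vip_residual(x, Fx, l, u):
    """Natural (projection) residual for VIP(l,u,F): x solves the
    problem iff x_i == median(l_i, x_i - F_i(x), u_i) for every i,
    i.e. iff this residual is the zero vector."""
    return [xi - min(max(xi - fi, li), ui)
            for xi, fi, li, ui in zip(x, Fx, l, u)]

# NCP(F) is the special case l = 0, u = +infinity.  Toy check with
# the hypothetical one-dimensional map F(x) = x - 1, solution x* = 1:
r = vip_residual([1.0], [0.0], [0.0], [math.inf])
```

At the solution the residual vanishes, while at a non-solution (e.g. $x=0$, where $F(0)=-1<0$) it does not.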

A basic idea of many algorithms for the solution of VIP(l,u,F) is to reformulate the problem as a nonlinear system of equations
$$H(z)=0,\tag{1.1}$$
where $H$ is a nonsmooth mapping. Because of the nonsmoothness of the operator $H$, the classical Newton method cannot, in general, be applied directly to problem (1.1).

It is not difficult to see that VIP(l,u,F) is equivalent to its KKT system
$$F(x)+y-z=0,$$
$$x_i-l_i\geq 0,\quad z_i\geq 0,\quad (x_i-l_i)z_i=0,\quad i=1,\dots,n,$$
$$u_i-x_i\geq 0,\quad y_i\geq 0,\quad (u_i-x_i)y_i=0,\quad i=1,\dots,n.$$
Eliminating $z$, the above relations are equivalent to the system
$$x_i-l_i\geq 0,\quad F_i(x)+y_i\geq 0,\quad (x_i-l_i)(F_i(x)+y_i)=0,\quad i=1,\dots,n,$$
$$u_i-x_i\geq 0,\quad y_i\geq 0,\quad (u_i-x_i)y_i=0,\quad i=1,\dots,n.\tag{1.2}$$

Recently much effort has been devoted to constructing smoothing approximations for the solution of VIP(l,u,F) or NCP(F). This class of algorithms, called Jacobian smoothing methods, is due to Chen et al. [12]. These methods solve at each iteration the generalized Newton equation
$$H_{\mu}'(z^k)d=-H(z^k).\tag{1.3}$$
However, the algorithm and convergence theory developed in [12] rely on the linear system (1.3) being solvable at each iteration, an assumption intimately related to $F$ being a $P_0$-function. Consequently, that Jacobian smoothing method is not well defined for general box constrained variational inequality problems. In this paper, we concentrate on one particular reformulation of VIP(l,u,F) and propose a new Jacobian smoothing method that is well defined for general box constrained variational inequality problems. Using the Fischer–Burmeister function $h:\mathbb{R}^2\to\mathbb{R}$ defined by (see [6])
$$h(a,b)=\sqrt{a^2+b^2}-a-b,$$
it is easy to see from (1.2) that VIP(l,u,F) is equivalent to the nonlinear system
$$H(z)=\begin{pmatrix} h(x_1-l_1,\,F_1(x)+y_1)\\ \vdots\\ h(x_n-l_n,\,F_n(x)+y_n)\\ h(u_1-x_1,\,y_1)\\ \vdots\\ h(u_n-x_n,\,y_n)\end{pmatrix}=0,\tag{1.4}$$
where $z=(x^T,y^T)^T\in\mathbb{R}^{2n}$ and $H:\mathbb{R}^{2n}\to\mathbb{R}^{2n}$.
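The key property of the Fischer–Burmeister function, $h(a,b)=0 \Leftrightarrow a\geq 0,\ b\geq 0,\ ab=0$, makes the equivalence above easy to verify numerically. A minimal sketch of $h$ and of the residual map $H$ (generic helper names, not the paper's implementation):

```python
import math

def fb(a, b):
    """Fischer-Burmeister function h(a,b) = sqrt(a^2 + b^2) - a - b;
    h(a,b) = 0  <=>  a >= 0, b >= 0, a*b = 0."""
    return math.hypot(a, b) - a - b

def H(x, y, Fx, l, u):
    """Semismooth reformulation of VIP(l,u,F): the 2n residuals
    h(x_i - l_i, F_i(x) + y_i) and h(u_i - x_i, y_i); z = (x, y)
    solves H(z) = 0 iff it satisfies the KKT conditions."""
    top = [fb(xi - li, fi + yi) for xi, li, fi, yi in zip(x, l, Fx, y)]
    bot = [fb(ui - xi, yi) for ui, xi, yi in zip(u, x, y)]
    return top + bot
```

For instance, `fb(2, 0)` and `fb(0, 3)` vanish (complementary nonnegative pairs), while `fb(1, 1)` does not.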

The globalization strategy for our algorithm is mainly based on the natural merit function $\theta:\mathbb{R}^{2n}\to\mathbb{R}_+$ given by
$$\theta(z)=\tfrac12\|H(z)\|^2,$$
where $z=(x^T,y^T)^T\in\mathbb{R}^{2n}$. The corresponding smooth operator $H_\mu:\mathbb{R}^{2n}\to\mathbb{R}^{2n}$ is defined analogously by
$$H_\mu(z)=\begin{pmatrix} h_\mu(x_1-l_1,\,F_1(x)+y_1)\\ \vdots\\ h_\mu(x_n-l_n,\,F_n(x)+y_n)\\ h_\mu(u_1-x_1,\,y_1)\\ \vdots\\ h_\mu(u_n-x_n,\,y_n)\end{pmatrix},\tag{1.5}$$
where $z=(x^T,y^T)^T\in\mathbb{R}^{2n}$ and $h_\mu:\mathbb{R}^2\to\mathbb{R}$ denotes the smooth approximation
$$h_\mu(a,b):=\begin{cases}\sqrt{a^2+b^2}-a-b, & \sqrt{a^2+b^2}>\mu,\\[4pt] \dfrac{a(a-2\mu)+b(b-2\mu)+\mu^2}{2\mu}, & \sqrt{a^2+b^2}\leq\mu,\end{cases}$$
of the Fischer–Burmeister function. It is easy to see that for $\sqrt{a^2+b^2}\leq\mu$,
$$|h_\mu(a,b)-h(a,b)|\leq\frac{\left(\sqrt{a^2+b^2}-\mu\right)^2}{2\mu},$$
and that $h_\mu$ is continuously differentiable with gradient
$$\nabla h_\mu(a,b)=\begin{cases}\left(\dfrac{a}{\sqrt{a^2+b^2}}-1,\ \dfrac{b}{\sqrt{a^2+b^2}}-1\right)^T, & \sqrt{a^2+b^2}>\mu,\\[6pt] \left(\dfrac{a-\mu}{\mu},\ \dfrac{b-\mu}{\mu}\right)^T, & \sqrt{a^2+b^2}\leq\mu.\end{cases}$$
Similarly, the smoothed merit function $\theta_\mu:\mathbb{R}^{2n}\to\mathbb{R}_+$ is given by
$$\theta_\mu(z)=\tfrac12\|H_\mu(z)\|^2,$$
where $z=(x^T,y^T)^T\in\mathbb{R}^{2n}$.
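The smoothing replaces the kernel $t=\sqrt{a^2+b^2}$ by the quadratic $t^2/(2\mu)+\mu/2$ on the ball $t\leq\mu$; the two branches and their gradients match on $t=\mu$, and the error bound above holds with equality there. A small sketch verifying these facts (generic helper names):

```python
import math

def fb_smooth(a, b, mu):
    """C^1 smoothing h_mu of the FB function: the kernel
    t = sqrt(a^2 + b^2) is kept for t > mu and replaced by the
    quadratic t^2/(2*mu) + mu/2 for t <= mu (mu > 0)."""
    t = math.hypot(a, b)
    if t > mu:
        return t - a - b
    return (a * (a - 2 * mu) + b * (b - 2 * mu) + mu * mu) / (2 * mu)

def fb_smooth_grad(a, b, mu):
    """Gradient of h_mu; the two branches agree on t = mu."""
    t = math.hypot(a, b)
    if t > mu:
        return (a / t - 1.0, b / t - 1.0)
    return ((a - mu) / mu, (b - mu) / mu)

# On the smoothed region t <= mu the approximation error equals
# (t - mu)^2 / (2*mu), which is exactly the stated bound:
a, b, mu = 0.1, 0.1, 1.0
err = abs(fb_smooth(a, b, mu) - (math.hypot(a, b) - a - b))
bound = (math.hypot(a, b) - mu) ** 2 / (2 * mu)
```

On the circle $t=\mu$ (e.g. $(a,b)=(0.6,0.8)$, $\mu=1$) both branches of $h_\mu$ return the same value, confirming continuity.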

Next we introduce some notation. Let $G:\mathbb{R}^n\to\mathbb{R}^m$ be continuously differentiable. Then $G'(x)\in\mathbb{R}^{m\times n}$ denotes the Jacobian of $G$ at a point $x\in\mathbb{R}^n$, whereas the symbol $\nabla G(x)$ is used for the transposed Jacobian. In particular, if $m=1$, the gradient $\nabla G(x)$ is viewed as a column vector. If $G:\mathbb{R}^n\to\mathbb{R}^m$ is only locally Lipschitzian, we can define Clarke's [10] generalized Jacobian as follows:
$$\partial G(x):=\operatorname{conv}\{H\in\mathbb{R}^{m\times n}\ |\ \exists\{x^k\}\subseteq D_G:\ x^k\to x\ \text{and}\ G'(x^k)\to H\};$$
here $D_G$ denotes the set of points at which $G$ is differentiable and $\operatorname{conv} S$ is the convex hull of a set $S$. If $m=1$, we call $\partial G(x)$ the generalized gradient of $G$ at $x$ for obvious reasons.

Usually, $\partial G(x)$ is not easy to compute, especially for $m>1$. For this reason, we use in this paper a kind of generalized Jacobian for the function $G$, the C-subdifferential, denoted $\partial_C G$ and defined by (see [11])
$$\partial_C G(x)=\partial G_1(x)\times\partial G_2(x)\times\dots\times\partial G_m(x),$$
where $G_i$ is the $i$th component function of $G$.

Furthermore, we denote by $\|x\|$ the Euclidean norm of $x\in\mathbb{R}^n$ and by $\|A\|$ the spectral norm of a matrix $A\in\mathbb{R}^{n\times n}$, i.e., the matrix norm induced by the Euclidean vector norm. Sometimes we also need the Frobenius norm $\|A\|_F$ of a matrix $A\in\mathbb{R}^{n\times n}$. If $A\in\mathbb{R}^{n\times n}$ is any given matrix and $M\subseteq\mathbb{R}^{n\times n}$ is a nonempty set of matrices, we denote by
$$\operatorname{dist}(A,M):=\inf_{B\in M}\|A-B\|$$
the distance between $A$ and $M$. Corresponding to the spectral and Frobenius norms, $\operatorname{dist}(A,M)$ is sometimes also written as $\operatorname{dist}_2(A,M)$ and $\operatorname{dist}_F(A,M)$, respectively.

The remainder of the paper is organized as follows. In the next section, the mathematical background and some preliminary results are summarized. In Section 3, the Jacobian consistency property and the Jacobian smoothing idea are discussed. The algorithm is described in detail in Section 4. Sections 5 and 6 are devoted to proving global convergence and local superlinear convergence of the algorithm, respectively. Numerical results are reported in Section 7.

Section snippets

Preliminaries

In this section, we summarize some properties of the functions $H$, $H_\mu$ and $\theta$. In addition, we prove some preliminary results which will be used later.

First, the following results can be obtained from the definition of the C-subdifferential and Proposition 3.1 in [2].

Proposition 2.1

For an arbitrary $z=(x^T,y^T)^T\in\mathbb{R}^{2n}$, we have
$$\partial_C H(z)^T=(\partial H_1(z),\partial H_2(z),\dots,\partial H_{2n}(z)),$$
where $\partial H_j(z)$, $j=1,2,\dots,2n$, denotes the generalized gradient of the $j$th component function of $H$, determined by
$$\partial H_i(z)=\begin{cases}\big(a_i(z)e_i^T+b_i(z)\nabla F_i(x)^T,\ b_i(z)e_i^T\big)^T, & i\in\alpha_1(z),\\ \big((\xi_i-1)e_i^T+(\eta_i-1)\nabla F_i(x)^T,\ (\eta_i-1)e_i^T\big)^T, & i\in\alpha_2(z),\end{cases}\quad\dots$$

Jacobian consistency property

First we introduce the definition of the Jacobian consistency property (see [12]).

Definition 3.1

Let $G(\cdot)$ be a locally Lipschitzian function on $\mathbb{R}^n$ and let $G_\mu(\cdot)$ be a corresponding smoothing approximation of $G(\cdot)$. If for every $x\in\mathbb{R}^n$
$$\lim_{\mu\downarrow 0}\operatorname{dist}(G_\mu'(x),\partial_C G(x))=0,$$
then we say that $G_\mu(\cdot)$ satisfies the Jacobian consistency property.

Lemma 3.2

Let $z=(x^T,y^T)^T\in\mathbb{R}^{2n}$ be arbitrary but fixed. Then the function $H_\mu$ defined in (1.5) satisfies the Jacobian consistency property, i.e.,
$$\lim_{\mu\downarrow 0}\operatorname{dist}(H_\mu'(z),\partial_C H(z))=0.$$

Proof

From the definition of $H_\mu$, we have for all $\mu>0$, …
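The Jacobian consistency of $H_\mu$ can be observed componentwise: at any point where the Fischer–Burmeister function $h$ is differentiable, $\nabla h_\mu$ coincides with $\nabla h$ (the unique element of the generalized gradient) as soon as $\mu<\sqrt{a^2+b^2}$, so the distance to $\partial_C h$ drops to zero. A numerical sketch under the smoothing defined in the introduction (helper name is generic):

```python
import math

def fb_smooth_grad(a, b, mu):
    # gradient of the smoothed FB function h_mu from (1.5)
    t = math.hypot(a, b)
    if t > mu:
        return (a / t - 1.0, b / t - 1.0)
    return ((a - mu) / mu, (b - mu) / mu)

# At (a, b) = (3, 4), t = 5, h is differentiable with gradient
# (a/t - 1, b/t - 1); the distance of grad h_mu to it shrinks as
# mu decreases and is exactly zero once mu < t:
a, b = 3.0, 4.0
exact = (a / 5.0 - 1.0, b / 5.0 - 1.0)
dists = [max(abs(g - e) for g, e in zip(fb_smooth_grad(a, b, mu), exact))
         for mu in (10.0, 5.5, 1.0, 0.1)]
```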

Algorithm

In this section, we give a detailed description of our Jacobian smoothing method and state some of its elementary properties. In particular, we show that the algorithm is well defined for an arbitrary box constrained variational inequality problem.

Algorithm 4.1 Jacobian Smoothing Method

  • (S.0)

Choose $z^0=(x^0,y^0)\in\mathbb{R}^{2n}$, $\lambda,\eta,\alpha,\rho\in(0,1)$, $\gamma>0$, $\sigma\in\left(0,\tfrac12(1-\alpha)\right)$, and $0\leq\epsilon\ll 1$. Set $\beta_0:=\|H(z^0)\|$, $\mu_0=\frac{\alpha}{\sqrt{2n}}\beta_0$, and $k:=0$.

  • (S.1)

If $\|\nabla\theta(z^k)\|\leq\epsilon$, stop.

  • (S.2)

Seek a solution $d^k\in\mathbb{R}^{2n}$ of the linear system
$$H_{\mu_k}'(z^k)d+H(z^k)=0\qquad(\text{Newton step}).\tag{4.1}$$
If (4.1) is not solvable, or if the …
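Although the remainder of step (S.2) is truncated in this snippet, the mechanism that makes the method well defined for arbitrary box constrained problems is clear from the surrounding text: try the smoothing Newton equation, and switch to a safeguard direction when it fails. The sketch below (hypothetical helper names; the paper's exact acceptance tests and line search are omitted) uses the steepest descent direction of the merit function $\theta$ as the fallback:

```python
import numpy as np

def newton_or_gradient_step(z, H, jac_smooth, grad_theta, mu):
    """Schematic of step (S.2): try the Jacobian smoothing Newton
    equation  H'_mu(z) d = -H(z);  if the system is (numerically)
    singular, fall back to the steepest descent direction of the
    merit function theta(z) = 0.5 * ||H(z)||^2."""
    try:
        d = np.linalg.solve(jac_smooth(z, mu), -H(z))
        return d, "newton"
    except np.linalg.LinAlgError:
        return -grad_theta(z), "gradient"
```

With an invertible smoothed Jacobian the Newton direction is returned; with a singular one the routine degrades gracefully instead of breaking down.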

Global convergence

We begin our global convergence analysis with the following observation.

Lemma 5.1

Let $\{z^k=((x^k)^T,(y^k)^T)^T\}\subseteq\mathbb{R}^{2n}$ be a sequence generated by Algorithm 4.1. Assume that $\{z^k\}$ has an accumulation point $z^*=((x^*)^T,(y^*)^T)^T\in\mathbb{R}^{2n}$ which is a solution of VIP(l,u,F). Then the index set $K$ is infinite and $\{\mu_k\}\to 0$.

Proof

Assume that $K$ is finite. Then it follows from (4.6) and the updating rules for $\beta_k$ in step (S.4) of Algorithm 4.1 that there is a $k_0\in\mathbb{N}$ such that
$$\beta_k=\beta_{k_0}\quad\text{and}\quad \|H(z^{k+1})\|>\max\{\eta\beta_k,\ \alpha^{-1}\|H(z^{k+1})-H_{\mu_k}(z^{k+1})\|\}\geq\eta\beta_k=\eta\beta_{k_0}$$
for all $k\in\mathbb{N}$ …

Local convergence

In this section, we want to show that Algorithm 4.1 is locally Q-superlinearly/Q-quadratically convergent under certain conditions. First let us state the following result in [9].

Proposition 6.1

Assume that $z^*$ is an isolated accumulation point of a sequence $\{z^k\}$ such that $\{\|z^{k+1}-z^k\|\}_L\to 0$ for any subsequence $\{z^k\}_L$ converging to $z^*$. Then the whole sequence $\{z^k\}$ converges to $z^*$.

By Proposition 6.1, we can obtain the following result.

Theorem 6.2

Let $\{z^k\}$ be a sequence generated by Algorithm 4.1. If one of the accumulation points …

Numerical experiments

In this section we present some numerical experiments for the algorithm proposed in Section 4. Throughout the computational experiments, the parameters used in Algorithm 4.1 were $\alpha=0.4$, $\sigma=0.25$, $\rho=0.95$, $\eta=0.5$, $\gamma=0.6$. The stopping criterion is $\|H(z^k)\|\leq 10^{-6}$.

Example 1

We first consider the following nonlinear complementarity problem, with test function
$$F(x)=\begin{pmatrix}3x_1^2+2x_1x_2+2x_2^2+x_3+3x_4-6\\ 2x_1^2+x_1+x_2^2+10x_3+2x_4-2\\ 3x_1^2+x_1x_2+2x_2^2+2x_3+9x_4-9\\ x_1^2+3x_2^2+2x_3+3x_4-3\end{pmatrix}.$$
Taking the constraint set $X=[l,u]$ with $l$ …
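This test function is the well-known Kojima–Shindo map. For the unbounded box $l=0$, $u=+\infty$ the problem reduces to NCP(F), and the point $x^*=(1,0,3,0)$ can be checked directly against the complementarity conditions $x\geq 0$, $F(x)\geq 0$, $x^TF(x)=0$ (the check below verifies this; the choice of point is ours, not taken from the truncated snippet):

```python
def F(x):
    """Test function of Example 1 (the Kojima-Shindo map)."""
    x1, x2, x3, x4 = x
    return [3 * x1**2 + 2 * x1 * x2 + 2 * x2**2 + x3 + 3 * x4 - 6,
            2 * x1**2 + x1 + x2**2 + 10 * x3 + 2 * x4 - 2,
            3 * x1**2 + x1 * x2 + 2 * x2**2 + 2 * x3 + 9 * x4 - 9,
            x1**2 + 3 * x2**2 + 2 * x3 + 3 * x4 - 3]

# Check the NCP conditions at x = (1, 0, 3, 0):
x = [1.0, 0.0, 3.0, 0.0]
Fx = F(x)
```

Here $F(x)=(0,31,0,4)$: wherever $x_i>0$ the corresponding $F_i(x)$ vanishes, and vice versa.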

References (20)

  • P.T. Harker et al.

Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications

    Math. Program.

    (1990)
  • F. Facchinei et al.

A new class of merit functions for nonlinear complementarity problems and a related algorithm

    SIAM J. Optim.

    (1997)
  • J.S. Pang

    Complementarity Problems, in Handbook

    (1995)
  • L. Qi

    Convergence analysis of some algorithms for solving nonsmooth equations

    Math. Oper. Res.

    (1993)
  • A. Fischer

    Solution of monotone complementarity problems with locally Lipschitzian functions

    Math. Program.

(1997)
  • A. Fischer

    A special Newton-type optimization method

    Optimization

    (1992)
  • C. Kanzow

    Some noninterior continuation methods for linear complementarity problems

    SIAM J. Matrix Anal.

    (1996)
  • C. Kanzow

A new approach to continuation methods for complementarity problems with uniform P-functions

    Oper. Res. Lett.

(1997)
  • J.J. Moré et al.

    Computing a trust region step

    SIAM J. Sci. Stat. Comput.

    (1983)
  • F.H. Clarke

    Optimization and Nonsmooth Analysis

    (1983)

Supported by the National Natural Science Foundation of China (Grant 10361003) and Guangxi Science Foundation.
