Numerical comparison between prediction–correction methods for general variational inequalities

https://doi.org/10.1016/j.amc.2006.08.001

Abstract

In this paper, we propose two prediction–correction methods for solving general variational inequalities. The first method makes use of a descent direction to produce the new iterate, while the second generates the corrector by the improved extragradient method. Under certain conditions, the global convergence of both methods is proved. It is also proved theoretically that the lower bound on the progress made by the second method is greater than that of the first. Numerical results are given to verify this assertion and to compare the efficiency of the two predictor–corrector methods.

Introduction

In recent years, classical variational inequality and complementarity problems have been extended and generalized to study a wide range of problems arising in mechanics, physics, optimization and the applied sciences; see [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19]. In 1988, Noor [11] introduced and considered a new class of variational inequalities involving two operators, known as general variational inequalities. It turned out that odd-order and non-symmetric obstacle, free, moving, unilateral and equilibrium problems arising in finance, economics, transportation, elasticity, optimization and the pure and applied sciences can be studied via general variational inequalities; see [4], [11], [12], [13], [14].

We now have a variety of techniques to suggest and analyze iterative algorithms for solving variational inequalities and related optimization problems. Fixed-point theory has played an important role in the development of such algorithms. Using the projection operator technique, one usually establishes an equivalence between the variational inequality and a fixed-point problem. This alternative equivalent formulation was used by Lions and Stampacchia [9] to study the existence of a solution of variational inequalities. A well-known projection method is the extragradient method of Korpelevich [8], which makes two simple projections at each iteration. Its convergence requires only that a solution exists and that the monotone operator is Lipschitz continuous. When the operator is not Lipschitz continuous, or when the Lipschitz constant is not known, the extragradient method and its variant forms require an Armijo-like line search to compute the step size, with a new projection needed for each trial, which leads to expensive computation. To overcome these difficulties, several modified projection and extragradient-type methods [5], [6], [8], [12], [13], [14], [15], [16], [17], [18], [19] have been suggested and developed for solving variational inequality problems.

Inspired and motivated by the research going on in this direction, we propose two prediction–correction methods for solving general variational inequalities. The first method makes use of a descent direction to produce the new iterate, while the second generates the corrector by the improved extragradient method. Under certain conditions, the global convergence of both methods is proved. An example is given to illustrate the efficiency of these two predictor–corrector methods.

Section snippets

Preliminaries

We consider the problem of finding $u \in \mathbb{R}^n$ such that $g(u) \in K$ and
$$\langle T(u),\, g(v) - g(u)\rangle \ge 0, \qquad \forall\, g(v) \in K, \tag{2.1}$$
where $T$ and $g$ are mappings from $\mathbb{R}^n$ into itself.

Throughout this paper we assume that $T$ is continuous and $g$-pseudomonotone on $\mathbb{R}^n$, i.e.,
$$\langle T(u^*),\, g(u) - g(u^*)\rangle \ge 0 \;\Longrightarrow\; \langle T(u),\, g(u) - g(u^*)\rangle \ge 0, \qquad \forall\, g(u),\, g(u^*) \in \mathbb{R}^n,$$
that $g$ is a homeomorphism on $\mathbb{R}^n$, i.e., $g$ is bijective, continuous and $g^{-1}$ is continuous, and that the solution set of problem (2.1), denoted by $S$, is non-empty.

Problem (2.1) is called the general variational inequality, which was introduced and studied by Noor [11].
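The predictor in the next section rests on the projection fixed-point formulation mentioned in the Introduction. For convenience we recall the standard statement in the present notation, where $P_K$ denotes the Euclidean projection onto the closed convex set $K$:
$$u \in \mathbb{R}^n \ \text{with}\ g(u) \in K \ \text{solves (2.1)} \iff g(u) = P_K\bigl[g(u) - \rho\, T(u)\bigr] \ \text{for any fixed}\ \rho > 0.$$
The predictor step below is exactly one evaluation of this projected map.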

Predictor–corrector methods

We propose two prediction–correction methods for solving problem (2.1). Both methods use the same predictor and the same step size $\alpha_k$; they differ only in the correction step.

  • Step 1.

    Compute
    $$w^k = g^{-1}\bigl\{P_K\bigl[g(u^k) - \rho_k T(u^k)\bigr]\bigr\}, \tag{3.1}$$
    where $\rho_k > 0$ satisfies
    $$\rho_k\,\|T(u^k) - T(w^k)\| \le \delta\,\|g(u^k) - g(w^k)\|, \qquad 0 < \delta < 1. \tag{3.2}$$

  • Step 2.

    Set
    $$\varepsilon^k = \rho_k\bigl(T(w^k) - T(u^k)\bigr), \qquad d(u^k, \rho_k) := g(u^k) - g(w^k) + \varepsilon^k, \qquad \varphi(u^k, \rho_k) := \bigl\langle g(u^k) - g(w^k),\, d(u^k, \rho_k)\bigr\rangle,$$

the step size
$$\alpha_k := \gamma\,\sigma_k, \qquad \text{where}\ 1 \le \gamma < 2 \ \text{and}\ \sigma_k := \frac{\varphi(u^k, \rho_k)}{\|d(u^k, \rho_k)\|^2},$$
and the next iterate
$$g\bigl(u_I^{k+1}\bigr) = P_K\bigl[g(u^k) - \alpha_k\, d(u^k, \rho_k)\bigr] \qquad \text{or} \qquad g\bigl(u_{II}^{k+1}\bigr) = P_K\bigl[g(u^k) - \alpha_k \rho_k\, T(w^k)\bigr].$$

Remark 3.1

(3.2) implies, in particular, that $\varphi(u^k, \rho_k) \ge (1 - \delta)\,\|g(u^k) - g(w^k)\|^2$.
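To make the two correction rules concrete, the following is a minimal NumPy sketch of the scheme above, assuming callable oracles for $T$, $g$, $g^{-1}$ and the projection $P_K$ are available. The self-adaptive halving of $\rho_k$, the stopping test and the default parameter values are illustrative choices, not prescriptions taken from the paper.

import numpy as np

def solve_gvi(T, g, g_inv, project_K, u0, delta=0.9, gamma=1.8,
              rho0=1.0, corrector="II", tol=1e-7, max_iter=1000):
    """Prediction-correction sketch for the general variational inequality
    <T(u), g(v) - g(u)> >= 0 for all g(v) in K.

    corrector="I"  : new iterate from the descent direction d(u^k, rho_k)
    corrector="II" : new iterate from the improved extragradient update
    """
    u = np.asarray(u0, dtype=float)
    rho = rho0
    for _ in range(max_iter):
        gu = g(u)
        # Predictor (3.1): w^k = g^{-1}( P_K[ g(u^k) - rho_k T(u^k) ] ),
        # with rho_k halved until condition (3.2) holds (illustrative rule).
        while True:
            w = g_inv(project_K(gu - rho * T(u)))
            r = gu - g(w)                               # g(u^k) - g(w^k)
            if np.linalg.norm(r) <= tol:                # residual small: stop
                return u
            if rho * np.linalg.norm(T(u) - T(w)) <= delta * np.linalg.norm(r):
                break
            rho *= 0.5
        # Quantities of Step 2.
        eps = rho * (T(w) - T(u))                       # epsilon^k
        d = r + eps                                     # d(u^k, rho_k)
        phi = float(np.dot(r, d))                       # phi(u^k, rho_k)
        sigma = phi / float(np.dot(d, d))
        alpha = gamma * sigma                           # step size alpha_k
        # Correctors: Method I or Method II.
        if corrector == "I":
            u = g_inv(project_K(gu - alpha * d))
        else:
            u = g_inv(project_K(gu - alpha * rho * T(w)))
    return u

Passing corrector="I" or corrector="II" selects the first or the second prediction–correction method, respectively.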

Convergence analysis

In this section, we prove the global convergence of the proposed methods. First, we need to prove the following theorem.

Theorem 4.1

Let $u^*$ be a solution of problem (2.1) and let $u_{I/II}^{k+1}$ be the sequences obtained from the proposed methods. Then $\{u^k\}$ is bounded and
$$\bigl\|g\bigl(u_{I/II}^{k+1}\bigr) - g(u^*)\bigr\|^2 \le \|g(u^k) - g(u^*)\|^2 - \tfrac{1}{2}\,\gamma(2 - \gamma)(1 - \delta)\,\|g(u^k) - g(w^k)\|^2.$$

Proof

Let $u^*$ be a solution of problem (2.1); then
$$\begin{aligned}
\bigl\|g\bigl(u_{I/II}^{k+1}\bigr) - g(u^*)\bigr\|^2 &\le \|g(u^k) - g(u^*)\|^2 - \Upsilon(\alpha_k)\\
&= \|g(u^k) - g(u^*)\|^2 - 2\gamma\sigma_k\bigl\langle g(u^k) - g(w^k),\, d(u^k, \rho_k)\bigr\rangle + \gamma^2\sigma_k^2\,\|d(u^k, \rho_k)\|^2\\
&= \|g(u^k) - g(u^*)\|^2 - 2\gamma\sigma_k\,\varphi(u^k, \rho_k) + \gamma^2\sigma_k\,\varphi(u^k, \rho_k)\\
&\le \|g(u^k) - g(u^*)\|^2 - \gamma\cdots
\end{aligned}$$
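Theorem 4.1 leads to global convergence by a standard argument, sketched briefly here: summing the inequality over $k$ gives
$$\tfrac{1}{2}\,\gamma(2 - \gamma)(1 - \delta)\sum_{k=0}^{\infty}\|g(u^k) - g(w^k)\|^2 \le \|g(u^0) - g(u^*)\|^2 < \infty,$$
so $\|g(u^k) - g(w^k)\| \to 0$ and $\{g(u^k)\}$ is bounded. Since $g$ is a homeomorphism and, with the self-adaptive choice, $\rho_k$ stays bounded away from zero, every cluster point $\bar{u}$ of $\{u^k\}$ satisfies $g(\bar{u}) = P_K\bigl[g(\bar{u}) - \bar{\rho}\, T(\bar{u})\bigr]$ for some $\bar{\rho} > 0$ and therefore belongs to $S$.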

Comparison of two methods

Let
$$\Theta_I(\alpha_k) := \|g(u^k) - g(u^*)\|^2 - \bigl\|g\bigl(u_I^{k+1}\bigr) - g(u^*)\bigr\|^2 \tag{5.1}$$
and
$$\Theta_{II}(\alpha_k) := \|g(u^k) - g(u^*)\|^2 - \bigl\|g\bigl(u_{II}^{k+1}\bigr) - g(u^*)\bigr\|^2 \tag{5.2}$$
measure the progress made by the new iterates. Note that we use the identical step size $\alpha_k$ in both prediction–correction methods. We then show the difference in the following theorem.

Theorem 5.1

Let $u^*$ be an arbitrary solution point of (2.1), and let $\Theta_I(\alpha_k)$ and $\Theta_{II}(\alpha_k)$ be defined in (5.1) and (5.2), respectively. Then, with $d^k := d(u^k, \rho_k)$, we have the following:
$$\Theta_I(\alpha_k) \ge \Upsilon_I(\alpha_k) := \Upsilon(\alpha_k) + \bigl\|g(u^k) - \alpha_k d^k - g\bigl(u_I^{k+1}\bigr)\bigr\|^2,$$
$$\Theta_{II}(\alpha_k) \ge \Upsilon_{II}(\alpha_k) := \Upsilon(\alpha_k) + \bigl\|g(u^k) - \alpha_k d^k - g\bigl(u_{II}^{k+1}\bigr)\bigr\|^2$$
and
$$\Upsilon_I(\alpha_k) \le \Upsilon_{II}(\alpha_k).$$

Computational results

To verify the theoretical assertions and the effectiveness of Method II, we consider the following least-distance problem:
$$\min\ \tfrac{1}{2}\|x - c\|^2 \quad \text{s.t.}\quad Ax \in K,$$
where $A \in \mathbb{R}^{n\times n}$, $c \in \mathbb{R}^n$ and $K \subset \mathbb{R}^n$ is a closed convex set. This problem can be written as
$$\min\ \tfrac{1}{2}\|x - c\|^2 \quad \text{s.t.}\quad Ax - \xi = 0,\ \xi \in K. \tag{6.1}$$
The Lagrangian function of problem (6.1) is
$$L(x, \xi, y) = \tfrac{1}{2}\langle x, x\rangle - \langle c, x\rangle - \langle y,\, Ax - \xi\rangle,$$
and then
$$L(x^*, \xi^*, y) \le L(x^*, \xi^*, y^*) \le L(x, \xi, y^*),$$
where $(x^*, \xi^*, y^*) \in \mathbb{R}^n \times K \times \mathbb{R}^n$ is a saddle point of the Lagrangian function. From the above inequalities we can obtain, for all $\xi \in K$,
$$x^* = A^{\top}y^* + c, \qquad \langle \xi - \xi^*,\, y^*\rangle \ge 0.$$
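The saddle-point conditions above suggest one natural way to cast the test problem as an instance of (2.1) in the multiplier $y$: taking $T(y) = y$ and $g(y) = A(A^{\top}y + c)$, the condition $\langle \xi - \xi^*, y^*\rangle \ge 0$ for all $\xi \in K$ becomes a general variational inequality, provided $AA^{\top}$ is nonsingular so that $g$ is a homeomorphism. The snippet below is a hypothetical small instance (random data, $K = \mathbb{R}^n_+$) wired to the solve_gvi sketch given after Remark 3.1; it illustrates the setup only and is not the paper's test data or results.

import numpy as np

# Hypothetical instance of the least-distance problem (6.1) with K = R^n_+.
rng = np.random.default_rng(0)
n = 5
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # assumed nonsingular
c = rng.standard_normal(n)
M = A @ A.T                                          # so that g(y) = M y + A c

T = lambda y: y                                      # identity operator
g = lambda y: M @ y + A @ c
g_inv = lambda z: np.linalg.solve(M, z - A @ c)      # requires A A^T invertible
project_K = lambda z: np.maximum(z, 0.0)             # projection onto R^n_+

y = solve_gvi(T, g, g_inv, project_K, u0=np.zeros(n), corrector="II")
x = A.T @ y + c                                      # recover the primal point
print("min component of A x:", (A @ x).min())        # feasibility check: ~ >= 0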

References (19)



1. This author was supported by NSFC Grant Nos. 10571083 and 70571033.

2. This author is supported by the Higher Education Commission, Pakistan, through research Grant No. 1-28/HEC/HRD/2005/90.
