Numerical comparison between prediction–correction methods for general variational inequalities
Introduction
In recent years, classical variational inequality and complementarity problems have been extended and generalized to study a wide range of problems arising in mechanics, physics, optimization and applied sciences, see [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19]. In 1988, Noor [11] introduced and considered a new class of variational inequalities involving two operators, which is known as the general variational inequalities. It turned out odd-order and non-symmetric obstacle, free, moving, unilateral and equilibrium problems arising in financial, economics, transportation, elasticity, optimization, pure and applied sciences can be studied via the general variational inequalities; see [4], [11], [12], [13], [14]. We now have a variety of techniques to suggest and analyze various iterative algorithms for solving variational inequalities and the related optimization problems. The fixed-point theory has played an important role in the development of various algorithms for solving variational inequalities. Using the projection operator technique, one usually establishes an equivalence between the variational inequalities and the fixed-point problem. This alternative equivalent formulation was used by Lions and Stampacchia [9] to study the existence of a solution of the variational inequalities. A well known projection method is the extragradient method of Korpelevich [8] which makes two simple projections at each iteration. Its convergence requires only that a solution exists and the monotone operator is Lipschitz continuous. When the operator is not Lipschitz continuous or when the Lipschitz continuous is not known, the extragradient method and its invariant forms require an Armijo-like line search procedure to compute the step size with a new projection need for each trial, which leads to expansive computation. To overcome these difficulties, several modified projection and extragradient-type methods [5], [6], [8], [12], [13], [14], [15], [16], [17], [18], [19] have been suggested and developed for solving variational inequality problems. Inspired and motivated by the research going in this direction, we propose two prediction–correction methods for solving general variational inequalities. The first method makes use of a descent direction to produce the new iterate while the second method generates the corrector by the improved extragradient method. Under certain conditions, the global convergence of both methods is proved. An example is given to illustrate the efficiency of these two predictor-corrector methods.
Section snippets
Preliminaries
We consider the problem of finding u∗ ∈ Rn such that g(u∗) ∈ K andwhere T and g are mappings from Rn into itself.
Throughout this paper we assume that T is continuous and g-pseudomonotone on Rn, i.e.,g is homeomorphism on Rn, i.e., g is bijective, continuous and g−1 is continuous and the solution set of problem (2.1) denoted by S∗, is non-empty.
Problem (2.1) is called the general variational inequality, which was
Predictor–corrector methods
We propose two prediction–correction methods for solving problem (2.1). In both methods we have the same predictor and the same step-size αk. But the correction steps are different from each other.
- Step 1.
Compute:where ρk > 0 satisfies:
- Step 2.
Set
Remark 3.1
(3.2) implies
Convergence analysis
In this section, we prove the global convergence of the proposed methods. First, we need to prove the following theorem. Theorem 4.1 Let u∗ be a solution of problem (2.1) and let be the sequences obtained from the proposed methods. Then {uk} is bounded and Proof Let u∗ be a solution of problem (2.1), then:
Comparison of two methods
Letandmeasure the progress made by the new iterates. Note that we use the identical step size αk in both prediction–correction methods. Then we show the difference in the following theorem. Theorem 5.1 Let u∗ be an arbitrary solution point of (2.1). Let ΘI(αk) and ΘII(αk) be defined in (5.1), (5.2), respectively. Then we have the following:and
Computational results
To verify the theoretical assertions and the effectiveness of the method II, we consider the following least distance problem:where A ∈ Rn×n, c ∈ Rn and K ⊂ Rn is a closed convex set. This problem can be written asThe Lagrangian function of problem (6.1) is:thenwhere (x∗, ξ∗, y∗) ∈ Rn × K × Rn is saddle point of the Lagrangian function. From the above inequalities we can obtain ∀ξ ∈ K:
References (19)
A self-adaptive method for solving general mixed variational inequalities
J. Math. Anal. Appl.
(2005)Modification of the extragradient method for solving variational inequalities and certain optimization problems
USSR Comput. Math. Math. Phys.
(1987)- et al.
Self-adaptive projection algorithms for general variational inequalities
Appl. Math. Comput.
(2004) - et al.
Smoothing functions for second-order-cone complementarity problems
SIAM J. Optim.
(2001) - et al.
Numerical Analysis of Variational Inequalities
(1981) Inexact implicit methods for monotone general variational inequalities
Math. Program.
(1999)- et al.
Improvement of some projection methods for monotone variational inequalities
J. Optim. Theory Appl.
(2002) - et al.
A variant of Korpelevich’s method for variational inequalities with a new strategy
Optimization
(1997) The extragradient method for finding saddle points and other problems
Matecon
(1976)
Cited by (0)
- 1
This author was supported by NSFC Grants Nos: 10571083 and 70571033.
- 2
This author is supported by the Higher Education Commission, Pakistan, through research Grant No: 1-28/HEC/HRD/2005/90.