Exploring complexity of large update interior-point methods for P*(κ) linear complementarity problem based on kernel function
Introduction
After the landmark paper of Karmarkar [17], linear optimization (LO) was revitalized as an active area of research. Since then, interior-point methods (IPMs) have shown their power in solving LO problems and large classes of other optimization problems (see [23], [24], [25]). IPMs are also powerful tools for solving widely used mathematical problems such as semi-definite optimization (SDO), second-order cone optimization (SOCO) and the linear complementarity problem (LCP).
LCPs form an important class of problems with many applications in mathematical programming and equilibrium problems. Indeed, it is known that, by exploiting the first-order optimality conditions of the optimization problem, any differentiable convex quadratic problem can be formulated as a monotone LCP, i.e. a $P_*(0)$ LCP, and vice versa [24]. Variational inequality problems are closely related to LCPs and are widely used in the study of equilibrium problems in, e.g., economics, transportation planning and game theory. The reader can find the basic theory, algorithms and applications in [10].
In this paper, we consider the following LCP: find a pair of vectors $(x, s)$ such that
$$s = Mx + q, \qquad xs = 0, \qquad x \ge 0, \; s \ge 0,$$
where $M \in \mathbb{R}^{n \times n}$ is a $P_*(\kappa)$ matrix and $q \in \mathbb{R}^n$. The primal-dual IPM for linear optimization was first introduced by Kojima et al. in [18] and extended to wider classes of problems such as LCPs [9]. Path-following IPMs follow the central path approximately in order to approach an optimal solution. Existence of the central path for LCPs has been proved by Kojima et al. [19]. They also generalized primal-dual IPMs to LCPs and established the same complexity results as in the LO case. Nowadays, a good measure for evaluating a new variant of IPMs is the capability of the method to be extended to LCPs [16].
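In NumPy terms, a candidate solution of such an LCP can be verified directly from the definition. The sketch below (the helper name `is_lcp_solution` is ours, not the paper's) checks nonnegativity of $x$ and $s = Mx + q$ together with complementarity $x^T s = 0$, up to a tolerance:

```python
import numpy as np

def is_lcp_solution(M, q, x, tol=1e-8):
    """Toy verifier for the LCP: s = Mx + q, x >= 0, s >= 0, x^T s = 0."""
    s = M @ x + q
    nonneg = np.all(x >= -tol) and np.all(s >= -tol)
    complementary = abs(x @ s) <= tol          # complementarity gap
    return bool(nonneg and complementary)

# Example: M positive definite, q >= 0, so x = 0 (with s = q) is a solution.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
print(is_lcp_solution(M, q, np.zeros(2)))  # True
```

This is only a verifier; the algorithm studied in the paper produces interior iterates that satisfy these conditions approximately and drives the complementarity gap to zero.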
Most polynomial-time interior-point algorithms for LO use the logarithmic barrier function as a proximity function. Recently, a new variant of feasible IPMs based on self-regular (SR) proximity functions was presented by Peng et al. [21]. Based on SR proximities, they provided the best worst-case theoretical complexity known so far for large-neighborhood feasible IPMs, namely $O(\sqrt{n}\,\log n\,\log\frac{n}{\epsilon})$, for the case when the barrier degree of the corresponding SR function is $q = \log n$. Bai et al. [4] proposed a new primal-dual IPM for LO based on a simple kernel function and derived its iteration complexity. A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization has been given by Bai et al. in [2]. The complexity analysis of kernel-function-based algorithms for LO has been extended to SOCO and semi-definite LCPs in [5], [22]. In [14], He et al. introduced a self-adjusting IPM for LCPs based on a logarithmic barrier function and gave some numerical results. Recently, we proposed a new class of non-self-regular kernel functions for LO and established a polynomial iteration complexity bound, depending on a kernel parameter, for large-update primal-dual IPMs [1].
The aim of this paper is to extend the kernel functions introduced in [1] to the solution of LCPs and to derive the corresponding iteration complexity bound for large-update methods. For a special choice of the kernel parameter, the resulting bound is better than the bounds obtained by Cho [6], Cho and Kim [7] and Cho et al. [8] for solving LCPs.
Note that although LCPs are a generalization of LO problems, we lose the orthogonality of the components of the search direction vectors $\Delta x$ and $\Delta s$. Therefore the analysis of the search direction and step size is slightly different from the LO case.
Our kernel function is strongly convex and is neither self-regular nor the logarithmic barrier function. We provide a simple analysis for deriving the iteration complexity of primal-dual IPMs based on our kernel function, which differs from the analysis developed for SR and logarithmic barrier kernel functions. We use the proximity function induced by this kernel function to measure the distance between the current iterate and the μ-center in the analysis of the algorithm.
The paper is organized as follows. In Section 2, we recall basic concepts and the notion of the central path. We describe the kernel function and its growth properties for LCPs in Section 3. The analysis of the feasible step size and the amount of decrease of the proximity function during an inner iteration are reported in Section 4. In Section 5, we derive the total number of iterations of our algorithm. We test the proposed algorithm on some monotone linear complementarity problems in Section 6.
The following notations are used throughout the paper. The nonnegative and positive orthants are denoted by $\mathbb{R}^n_+$ and $\mathbb{R}^n_{++}$, respectively. For $x \in \mathbb{R}^n$, $x_{\min} := \min_i x_i$; i.e., the minimal component of $x$. The vector norm is the 2-norm, denoted by $\|\cdot\|$. We also denote the all-one vector of length $n$ by $e$. The diagonal matrix with diagonal vector $x$ is denoted by $X = \mathrm{diag}(x)$. The index set $J$ is $J = \{1, 2, \ldots, n\}$. For $x, s \in \mathbb{R}^n$, $xs$ denotes the coordinate-wise (Hadamard) product and $x^T s$ denotes the scalar product. We say $f(x) = \Theta(g(x))$ if there exist positive constants $c_1$ and $c_2$ such that $c_1 g(x) \le f(x) \le c_2 g(x)$ holds for all $x$. Further, $f(x) = O(g(x))$ if there exists a positive constant $c$ such that $f(x) \le c\, g(x)$ holds for all $x$.
Preliminary
In this section we review the idea underlying the approach of this paper. The class of $P_*(\kappa)$ matrices was first introduced by Kojima et al. [19]. Here, we give some definitions, based on [19], about $P_*(\kappa)$ matrices, which generalize positive semi-definite matrices.

Definition 2.1. Let $\kappa \ge 0$ be a nonnegative number. A matrix $M \in \mathbb{R}^{n \times n}$ is called a $P_*(\kappa)$ matrix if
$$(1 + 4\kappa) \sum_{i \in J_+(x)} x_i (Mx)_i + \sum_{i \in J_-(x)} x_i (Mx)_i \ge 0$$
for all $x \in \mathbb{R}^n$, where
$$J_+(x) = \{\, i \in J : x_i (Mx)_i \ge 0 \,\}, \qquad J_-(x) = \{\, i \in J : x_i (Mx)_i < 0 \,\}.$$
Note that for $\kappa = 0$, $P_*(0)$ is the class of positive semi-definite matrices.
The kernel function and growth behavior
In [1], we introduced the kernel function (3.1) for suitable values of its parameters and, by using the proximity function induced by this kernel function, we derived the iteration complexity of large-update IPMs for solving linear optimization problems.
In this section, we investigate and recall some properties of (3.1), which determine the search direction in Algorithm 1. We also verify the growth behavior of the kernel function (3.1).
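Whatever the concrete kernel, the way it induces the proximity measure is the same: $\Psi(v) = \sum_i \psi(v_i)$ with the scaled vector $v = \sqrt{xs/\mu}$. The sketch below uses the classical logarithmic-barrier kernel $\psi(t) = (t^2 - 1)/2 - \ln t$ purely as a stand-in for (3.1), whose exact formula is given in [1]:

```python
import numpy as np

def psi_log(t):
    """Classical logarithmic-barrier kernel (stand-in for the paper's (3.1)):
    psi(1) = 0 and psi attains its minimum at t = 1."""
    return (t ** 2 - 1) / 2 - np.log(t)

def proximity(x, s, mu):
    """Kernel-based proximity Psi(v) = sum_i psi(v_i), v = sqrt(x*s/mu).
    Psi vanishes exactly on the mu-center, where x_i s_i = mu for all i."""
    v = np.sqrt(x * s / mu)
    return psi_log(v).sum()

x = np.array([1.0, 2.0]); s = np.array([1.0, 0.5]); mu = 1.0
print(proximity(x, s, mu))  # 0.0, since x_i s_i = mu for every i
```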
A default value for step size
In this section we determine a feasible step size α for which the proximity function decreases, and we derive an upper bound on the amount of the decrease. Note that although LCPs are a generalization of LO problems, we lose the orthogonality of the vectors $\Delta x$ and $\Delta s$ in this case. Therefore, the analysis of the step size is different from the LO case.
After a damped step with step size α, we have $x_+ = x + \alpha \Delta x$ and $s_+ = s + \alpha \Delta s$. Using (2.2), we can then bound the proximity after the step.
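A generic way to realize a damped step in code is to backtrack on α until the new iterate stays strictly positive and the proximity decreases. The paper instead derives an explicit default value of α, so the following is only an illustrative sketch with names of our own choosing:

```python
import numpy as np

def damped_step(x, s, dx, ds, proximity, mu,
                alpha0=1.0, beta=0.5, max_halvings=50):
    """Backtracking line search: shrink alpha until the iterate stays
    strictly positive and the proximity measure strictly decreases."""
    phi0 = proximity(x, s, mu)
    alpha = alpha0
    for _ in range(max_halvings):
        xn, sn = x + alpha * dx, s + alpha * ds
        if np.all(xn > 0) and np.all(sn > 0) and proximity(xn, sn, mu) < phi0:
            return xn, sn, alpha
        alpha *= beta                  # damp the step and retry
    raise RuntimeError("no feasible decreasing step found")

# Toy proximity: squared distance of v = sqrt(xs/mu) to the all-one vector.
prox = lambda x, s, mu: ((np.sqrt(x * s / mu) - 1.0) ** 2).sum()
x = s = np.array([2.0])
xn, sn, alpha = damped_step(x, s, np.array([-1.0]), np.array([-1.0]), prox, 1.0)
print(alpha)  # 1.0: here the full step already decreases the proximity
```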
Decrease of the proximity and complexity
In this section, we first obtain an estimate for the decrease of the proximity function and then, by using some technical lemmas, we derive the iteration complexity.
Computational results
The algorithm was programmed in FORTRAN and tested on a P4 Core2 computer with 2 Mb RAM on six monotone linear complementarity problems, a subclass of P*(κ) linear complementarity problems. In all cases, convergence was declared when the following criteria were satisfied.
We note that the starting point for these problems has been chosen based on the embedding model introduced in [19].
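A typical convergence test in this literature combines a small complementarity gap with a small feasibility residual; the sketch below (the helper name `converged` is ours) illustrates such a test, not necessarily the exact criteria used in these experiments:

```python
import numpy as np

def converged(x, s, M, q, eps=1e-8):
    """Generic IPM stopping test: complementarity gap x^T s below eps and
    the feasibility residual ||s - (Mx + q)|| below eps."""
    gap = x @ s
    residual = np.linalg.norm(s - (M @ x + q))
    return bool(gap <= eps and residual <= eps)

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
print(converged(np.zeros(2), q, M, q))  # True: x = 0, s = q is an exact solution
```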
Acknowledgements
This research was supported in part by a grant from the Institute for Studies in Theoretical Physics and Mathematics (IPM grant No. 86900017) to the second author. The authors would also like to thank the Research Councils of Razi University and K.N. Toosi University of Technology. Part of this work was carried out and supported while the second author was visiting the Institute for Mathematical Sciences (IMS), National University of Singapore (NUS) in 2006.
References (25)
- et al., An interior-point algorithm for linear optimization based on a new kernel function, Southeast Asian Bulletin of Mathematics (2005)
- et al., A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization, SIAM Journal on Optimization (2005)
- et al., A new efficient large-update primal-dual interior-point method based on a finite barrier, SIAM Journal on Optimization (2003)
- et al., A polynomial-time algorithm for linear optimization based on a new simple kernel function, Optimization Methods and Software (2003)
- et al., Primal-dual interior-point algorithms for second-order cone optimization based on a new parametric kernel function, Acta Mathematica Sinica English Series (2007)
- A new large-update interior point algorithm for linear complementarity problems, Journal of Computational and Applied Mathematics (2008)
- et al., A new large-update interior point algorithm for LCPs based on kernel functions, Applied Mathematics and Computation (2006)
- et al., Complexity of large-update interior point algorithm for linear complementarity problems, Computers and Mathematics with Applications (2007)
- Log-barrier method for two-stage quadratic stochastic programming, Applied Mathematics and Computation (2005)
- et al., The Linear Complementarity Problem (1992)
- Nonlinear Programming, Sequential Unconstrained Minimization Techniques