Exploring complexity of large update interior-point methods for P∗(κ) linear complementarity problems based on kernel functions

https://doi.org/10.1016/j.amc.2008.11.002

Abstract

Interior-point methods are not only the most effective methods in practice but also have polynomial-time complexity. Large update interior-point methods perform much better in practice than small update methods, which have the best known theoretical complexity. In this paper, motivated by the complexity results for linear optimization based on kernel functions, we extend a generic primal-dual interior-point algorithm based on a new kernel function to solve P∗(κ) linear complementarity problems. Using some elegant and simple tools, and under the interior-point condition, we show that large update primal-dual interior-point methods for solving P∗(κ) linear complementarity problems enjoy the iteration bound O(q(1+2κ)√n (log n)^((q+1)/q) log(n/ε)), which becomes O((1+2κ)√n log n log(log n) log(n/ε)) for a special choice of the parameter q. This bound is much better than that of classical primal-dual interior-point methods based on the logarithmic barrier function, and than those based on the kernel functions recently introduced by some authors in the optimization field. Some computational results are also provided.

Introduction

After the landmark paper of Karmarkar [17], Linear Optimization (LO) was revitalized as an active area of research. Since then, interior-point methods (IPMs) have shown their power in solving LO problems and large classes of other optimization problems (see [23], [24], [25]). IPMs are also powerful tools for solving widely used mathematical problems such as Semi-definite Optimization (SDO), Second Order Conic Optimization (SOCO) and the Linear Complementarity Problem (LCP).

LCPs are among the most important problems, with many applications in mathematical programming and equilibrium problems. Indeed, it is known that, by exploiting the first-order optimality conditions of an optimization problem, any differentiable convex quadratic problem can be formulated as a monotone LCP, i.e. a P∗(0) LCP, and vice versa [24]. Variational inequality problems are closely connected with LCPs and are widely used in the study of equilibrium problems in, e.g., economics, transportation planning and game theory. The reader can find the basic theory, algorithms and applications in [10].

In this paper, we consider the following LCP:

s = Mx + c,  xs = 0,  (x, s) ≥ 0,

where M ∈ R^{n×n} is a P∗(κ) matrix and c ∈ R^n. The primal-dual IPM for linear optimization was first introduced by Kojima et al. in [18] and extended to wider classes of problems such as the P∗(0) LCP [9]. Path-following IPMs approximately follow the central path toward a solution. Existence of the central path for the P∗(κ) LCP has been proved by Kojima et al. [19]. They also generalized primal-dual IPMs to P∗(κ) LCPs and established the same complexity results as in the LO case. Nowadays, a good measure for evaluating a new variant of IPMs is the capability of the method to extend to P∗(κ) LCPs [16].
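As a concrete illustration of the problem format above, the following Python sketch (assuming NumPy; the instance and the helper name `is_lcp_solution` are ours, not from the paper) checks feasibility and complementarity of a candidate solution of a small monotone LCP:

```python
import numpy as np

# Illustrative instance of the LCP  s = Mx + c,  xs = 0,  (x, s) >= 0,
# with a positive semi-definite (hence P_*(0), i.e. monotone) matrix M.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
c = np.array([-1.0, -1.0])

def is_lcp_solution(x, M, c, tol=1e-10):
    """Check nonnegativity of (x, s) and complementarity x^T s = 0."""
    s = M @ x + c
    return (np.all(x >= -tol) and np.all(s >= -tol)
            and abs(x @ s) <= tol)

# x = (1/3, 1/3) gives s = Mx + c = (0, 0), so complementarity holds.
x = np.array([1/3, 1/3])
print(is_lcp_solution(x, M, c))  # True
```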

Most polynomial-time interior-point algorithms for LO use the logarithmic barrier function as a proximity function. A new variant of feasible IPMs based on Self-Regular (SR) proximity functions was presented by Peng et al. [21]. Based on SR-proximities, they obtained the best worst-case theoretical complexity so far for large neighborhood feasible IPMs, namely O(√n log n log(n/ε)), for the case when the barrier degree of the corresponding SR function is 1 + log n. Bai et al. [4] proposed a new primal-dual IPM for LO based on a simple kernel function and obtained an O(n log(n/ε)) iteration complexity. A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization was given by Bai et al. in [2]. The complexity analysis of kernel-function-based algorithms for LO has been extended to SOCO and semi-definite LCPs in [5], [22]. In [14], He et al. introduced a self-adjusting IPM for LCPs based on a logarithmic barrier function and gave some numerical results. Recently, we proposed a new class of non-self-regular kernel functions for LO and obtained an O(q√n (log n)^(1+1/q) log(n/ε)) iteration complexity, with q ≥ 1, for large update primal-dual IPMs [1].

The aim of this paper is to extend the kernel functions introduced in [1] to solving P∗(κ) LCPs and to obtain the iteration complexity O(q(1+2κ)√n (log n)^(1+1/q) log(n/ε)), for q ≥ 1. As a special case, we obtain O((1+2κ)√n log n log(log n) log(n/ε)) when we choose q = O(log(log n)). Obviously, this bound is better than the bounds obtained by Cho [6], Cho and Kim [7] and Cho et al. [8] for solving P∗(κ) LCPs.

Note that although P∗(κ) LCPs are a generalization of LO problems, we lose the orthogonality of the components of the search direction vectors dx and ds. Therefore, the analysis of the search direction and the step size differs slightly from the LO case.

Our kernel function is strongly convex and is neither self-regular nor the logarithmic barrier function. Here we provide a simple analysis, different from the analyses devised for SR and logarithmic barrier kernel functions, to obtain the iteration complexity of primal-dual IPMs based on our kernel function. In the analysis of the algorithm, we use the proximity function Ψ(v) induced by this kernel function to measure the distance between the current iterate and the μ-center.

The paper is organized as follows: in Section 2, we recall basic concepts and the notion of the central path. We describe the kernel function and its growth properties for P∗(κ) LCPs in Section 3. The analysis of the feasible step size and of the amount of decrease in the proximity function during an inner iteration is reported in Section 4. In Section 5, we derive the total number of iterations of our algorithm. We test the proposed algorithm on some monotone linear complementarity problems in Section 6.

The following notation is used throughout the paper. The nonnegative and positive orthants are denoted by R^n_+ and R^n_++, respectively. For x ∈ R^n, x_min = min{x_1, x_2, …, x_n}, i.e., the minimal component of x. The vector norm is the 2-norm, denoted by ‖·‖. We denote the all-one vector of length n by e. The diagonal matrix with diagonal vector x is denoted by X = diag(x). The index set J is J = {1, 2, …, n}. For x, s ∈ R^n, xs denotes the coordinate-wise (Hadamard) product and x^T s the scalar product. We say f(t) = Θ(g(t)) if there exist positive constants ω_1 and ω_2 such that ω_1 g(t) ≤ f(t) ≤ ω_2 g(t) holds for all t > 0. Further, f(t) = O(g(t)) if there exists a positive constant ω such that f(t) ≤ ω g(t) holds for all t > 0.

Section snippets

Preliminary

In this section, we review the ideas underlying the approach of this paper. The P∗(κ) matrix was first introduced by Kojima et al. [19]. Here we give some definitions, based on [19], concerning the P∗(κ) matrix, which generalizes positive semi-definite matrices.

Definition 2.1

Let κ ≥ 0 be a nonnegative number. A matrix M ∈ R^{n×n} is called a P∗(κ) matrix if

(1 + 4κ) Σ_{i ∈ J+(x)} x_i (Mx)_i + Σ_{i ∈ J−(x)} x_i (Mx)_i ≥ 0

for all x ∈ R^n, where

J+(x) = {i ∈ J : x_i (Mx)_i > 0}  and  J−(x) = {i ∈ J : x_i (Mx)_i < 0}.

Note that for κ = 0, the class of P∗(0) matrices coincides with the class of positive semi-definite matrices.
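Definition 2.1 can be probed numerically. The sketch below (Python with NumPy; the function names are ours, not the paper's) tests the defining inequality on randomly sampled vectors x: any violation disproves the P∗(κ) property, while passing the test is only evidence, not a certificate.

```python
import numpy as np

# Randomized check of the P_*(kappa) defining inequality
#   (1 + 4*kappa) * sum_{i in J+} x_i (Mx)_i + sum_{i in J-} x_i (Mx)_i >= 0.
def p_star_inequality_holds(M, kappa, x, tol=1e-12):
    y = x * (M @ x)                       # componentwise products x_i (Mx)_i
    pos, neg = y[y > 0].sum(), y[y < 0].sum()
    return (1 + 4 * kappa) * pos + neg >= -tol

def sample_check(M, kappa, trials=1000, seed=0):
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    return all(p_star_inequality_holds(M, kappa, rng.standard_normal(n))
               for _ in range(trials))

# A positive semi-definite M passes with kappa = 0 (P_*(0) = monotone).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
print(sample_check(M, kappa=0.0))  # True
```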

The kernel function and growth behavior

In [1], we introduced the kernel function

ψ(t) = (t² − 1)/2 + (1/q)(e^(t^(−q) − 1) − 1),  for t > 0 and q ≥ 1,

and, by using the proximity function induced by this kernel function, we showed that the complexity result for large update IPMs for solving linear optimization problems is O(q√n (log n)^(1+1/q) log(n/ε)).
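Reading (3.1) as ψ(t) = (t² − 1)/2 + (e^(t^(−q) − 1) − 1)/q, the kernel and the induced proximity measure Ψ(v) = Σ_i ψ(v_i) can be sketched as follows (Python with NumPy; an illustrative sketch under that reading, not the paper's code). Note that ψ(1) = 0 is the global minimum, so Ψ(v) = 0 exactly when v = e, i.e. on the μ-center.

```python
import numpy as np

# Kernel function (3.1) and the induced proximity measure:
#   psi(t) = (t**2 - 1)/2 + (exp(t**(-q) - 1) - 1)/q,   t > 0, q >= 1,
#   Psi(v) = sum_i psi(v_i).
def psi(t, q=1.0):
    return (t**2 - 1.0) / 2.0 + (np.exp(t**(-q) - 1.0) - 1.0) / q

def proximity(v, q=1.0):
    return np.sum(psi(np.asarray(v, dtype=float), q))

print(psi(1.0))                     # 0.0 at the minimizer t = 1
print(proximity([1.0, 1.0, 1.0]))   # 0.0 on the central path (v = e)
print(proximity([0.5, 2.0]) > 0)    # True away from it
```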

In this section, we investigate and recall some properties of (3.1), which determines the search direction in Algorithm 1. We also examine the growth behavior of the kernel function (3.1). In the analysis of Algorithm 1, we

A default value for step size

In this section, we determine a feasible step size α for which the proximity function decreases, and we derive an upper bound for the amount of the decrease. Note that although P∗(κ) LCPs are a generalization of LO problems, we lose the orthogonality of the vectors dx and ds in this case. Therefore, the analysis of the step size differs from the LO case.

After a damped step with step size α, we have

x+ = x + αΔx,  s+ = s + αΔs.

Using (2.2), we get

x+ = x(e + αΔx/x) = x(e + α dx/v) = (x/v)(v + α dx),

and

s+ = s(e + αΔs/s) = s(e + α ds/v) = (s/v)(v + α ds).

Thus, we
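The identities for the damped update can be verified numerically. The sketch below (Python with NumPy; all concrete numbers are arbitrary illustrative values, and the scaling v = √(xs/μ), dx = vΔx/x, ds = vΔs/s is the standard one) confirms that x + αΔx coincides with (x/v)(v + α dx), and similarly for s:

```python
import numpy as np

# Check the scaled-update identities after a damped step with step size alpha:
#   with v = sqrt(x*s/mu), dx = v*Dx/x, ds = v*Ds/s,
#   x + alpha*Dx == (x/v)*(v + alpha*dx),  s + alpha*Ds == (s/v)*(v + alpha*ds).
rng = np.random.default_rng(1)
n, mu, alpha = 4, 0.5, 0.3
x, s = rng.uniform(0.5, 2.0, n), rng.uniform(0.5, 2.0, n)
Dx, Ds = rng.standard_normal(n), rng.standard_normal(n)  # search directions

v = np.sqrt(x * s / mu)
dx, ds = v * Dx / x, v * Ds / s          # scaled search directions

x_plus = x + alpha * Dx
s_plus = s + alpha * Ds
print(np.allclose(x_plus, (x / v) * (v + alpha * dx)))  # True
print(np.allclose(s_plus, (s / v) * (v + alpha * ds)))  # True
```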

Decrease of the proximity and complexity

In this section, we first obtain an estimate for the value of f(α̃); then, by using some technical lemmas, we derive the iteration complexity.

Computational results

The algorithm was programmed in FORTRAN and tested on a P4 Core2 computer with 2 Mb RAM on six P∗(0) (monotone) linear complementarity problems, which form a subclass of P∗(κ) linear complementarity problems. In all cases, convergence was declared when the following criteria held:

‖s^k − Mx^k − c‖/(1 + ‖c‖) ≤ 10^−8,  (x^k)^T s^k/(1 + ‖x^k‖) ≤ 10^−8.
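The stopping test, as stated, is straightforward to implement; the sketch below (Python with NumPy rather than the paper's FORTRAN; `converged` is our name) evaluates both criteria, residual feasibility and the scaled complementarity gap, on a small monotone instance:

```python
import numpy as np

# Stopping criteria from the experiments:
#   ||s^k - M x^k - c|| / (1 + ||c||) <= 1e-8   (residual feasibility)
#   (x^k)^T s^k / (1 + ||x^k||)       <= 1e-8   (complementarity gap)
def converged(x, s, M, c, tol=1e-8):
    feas = np.linalg.norm(s - M @ x - c) / (1.0 + np.linalg.norm(c))
    gap = (x @ s) / (1.0 + np.linalg.norm(x))
    return feas <= tol and gap <= tol

M = np.array([[2.0, 1.0], [1.0, 2.0]])
c = np.array([-1.0, -1.0])
x = np.array([1/3, 1/3])
s = M @ x + c                 # exact solution: s = 0, so both tests pass
print(converged(x, s, M, c))  # True
```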

Note that the starting point for these problems was chosen based on the embedding model introduced in [19]. The algorithm was implemented for q = 1, 2, 3 and the

Acknowledgements

This research was in part supported by the grant from Institute for Studies in Theoretical Physics and Mathematics (IPM grant No. 86900017) for the second author. The authors also would like to thank the Research Councils of Razi University and K.N. Toosi University of Technology. A part of this work was done and supported while the second author was visiting the Institute for Mathematical Sciences (IMS), National University of Singapore (NUS) in 2006.

References (25)

  • K. Amini et al., An interior-point algorithm for linear optimization based on a new kernel function, Southeast Asian Bulletin of Mathematics (2005)
  • Y.Q. Bai et al., A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization, SIAM Journal on Optimization (2005)
  • Y.Q. Bai et al., A new efficient large-update primal-dual interior-point method based on a finite barrier, SIAM Journal on Optimization (2003)
  • Y.Q. Bai et al., A polynomial-time algorithm for linear optimization based on a new simple kernel function, Optimization Methods and Software (2003)
  • Y.Q. Bai et al., Primal-dual interior-point algorithms for second-order cone optimization based on a new parametric kernel function, Acta Mathematica Sinica, English Series (2007)
  • G.M. Cho, A new large-update interior point algorithm for P∗(κ) linear complementarity problems, Journal of Computational and Applied Mathematics (2008)
  • G.M. Cho et al., A new large-update interior point algorithm for P∗(κ) LCPs based on kernel functions, Applied Mathematics and Computation (2006)
  • G.M. Cho et al., Complexity of large-update interior point algorithm for P∗(κ) linear complementarity problems, Computers and Mathematics with Applications (2007)
  • G.M. Cho, Log-barrier method for two-stage quadratic stochastic programming, Applied Mathematics and Computation (2005)
  • R.W. Cottle et al., The Linear Complementarity Problem (1992)
  • A.V. Fiacco et al., Nonlinear Programming: Sequential Unconstrained Minimization Techniques (1968)
  • K.R. Frisch, The logarithmic potential method for convex programming, Unpublished Manuscript, University of Oslo, ...