1 Introduction

The (general) LCP belongs to the class of NP-complete problems, since the feasibility problem for linear equations with binary variables can be formulated as an LCP [15]. The NP-completeness of the LCP had been proved earlier, in a different way, by Chung in 1989 [2]. The notion of sufficient matrices and of the sufficient LCP was introduced in 1989 by Cottle et al. [3], who also proved that the solution set of a sufficient LCP is always convex. The class of \(P_*\)-matrices was introduced by Kojima et al. [15], who showed that a \(P_*\)-matrix is column sufficient. Furthermore, they proved that the central path of a sufficient LCP exists and is unique under the assumption that the problem has a strictly feasible solution. Their algorithm has polynomial iteration complexity \(O((1+\kappa )\sqrt{n}L)\), which is still the best complexity result for solving \(P_*(\kappa )\)-LCPs. Subsequently, Guu and Cottle [7] proved that a \(P_*\)-matrix is also row sufficient, and therefore the class of \(P_*\)-matrices is included in the class of sufficient matrices. Soon afterwards, Väliaho proved the reverse inclusion [30]; therefore \(P_*\) coincides with the class of sufficient matrices.

The computation of the constant \(\kappa \) of a \(P_*\)-matrix (sufficient matrix) is a very difficult task. No polynomial time algorithm is known for checking whether a matrix is \(P_*(\kappa )\) or not. The best known test for the \(P_*(\kappa )\) property, introduced by Väliaho in [31], is not polynomial. For applying an interior point method (IPM) to an LCP with a \(P_*(\kappa )\)-matrix, we need an initial interior point (or have to use an infeasible IPM), and we need to know a priori the \(\kappa \) value of the matrix M. An initial interior point can be found by using an embedding model (see [26]), but the a priori knowledge of \(\kappa \) is too strong an assumption, since the only known algorithm for computing the exact value of \(\kappa \), the algorithm of Väliaho, is exponential. Potra and Liu [22] softened this assumption: they modified their IPM in such a way that only the sufficiency of the matrix needs to be known. Nowadays, it is a minimal requirement for a (good) interior point algorithm defined for sufficient LCPs to be independent of the knowledge (or a good estimate) of \(\kappa \). However, no interior point algorithm is known for sufficient LCPs whose complexity does not depend on \(\kappa \). Whether an interior point algorithm for sufficient LCPs could exist whose analysis does not depend on \(\kappa \) is an open problem.

The Mizuno–Todd–Ye (MTY) interior point algorithm was the first algorithm for linear optimization (LO) having both polynomial complexity and superlinear convergence [20]. This result has been generalized to sufficient LCPs by different authors. The first generalization was published by Miao [19] in 1995. Although Miao's result is very good from several points of view (it gives the best complexity result in the \(l_2\) neighborhood of the central path and is quadratically convergent for nondegenerate problems), it has a significant drawback, as pointed out in Potra's paper of 2014 [23]: Miao's algorithm explicitly uses the constant \(\kappa \), since the step length depends on \(\kappa \). This is usually a property of short step methods rather than of predictor–corrector or large neighborhood methods. By presenting a predictor–corrector algorithm that does not depend on \(\kappa \), Potra and Sheng [24] improved Miao's algorithm. Recently, Kheirfam [14] proposed a predictor–corrector interior point algorithm for horizontal LCPs (HLCPs) based on an equivalent transformation of the centering equations of the central path. Other generalizations of the MTY algorithm, like those of Illés and Nagy [8] and Potra and Liu [22], do not share this drawback of Miao's algorithm. The only problem with the results of Illés and Nagy, and of Potra and Liu, is that their complexity results (the full analysis of their algorithms) heavily depend on the \(P_*(\kappa )\) property. Potra and Liu's result is slightly better than that of Illés and Nagy in the sense that they do not need to know a priori the value of \(\kappa \), only that the matrix is sufficient. Complexity results published by Tseng [29] in 2000 imply, however, that deciding whether there exists a finite \(\kappa \) for which a given matrix is \(P_*(\kappa )\) is an NP-hard problem. To be precise, Tseng showed that the problem of deciding whether a given integer matrix is not sufficient is NP-complete in the Turing model. Thus even the knowledge of the sufficiency of the matrix is a very strong assumption, since it cannot be checked efficiently.

Illés et al., in a series of papers [9,10,11], showed that there exist interior point algorithms for sufficient LCPs that can be generalized to LCPs with arbitrary matrices in the form of EP-theorems, similar to that of Fukuda et al. [6]. Since the sufficiency (or \(P_*(\kappa )\)) property of a matrix cannot be checked with a polynomial time algorithm, if the matrix is not semidefinite and we would like to solve an LCP, then we need to apply IPMs similar to those of Illés et al.; otherwise there is no chance to handle practical problems where the property of the given matrix is unknown. Furthermore, there is a need for an upper bound \(\tilde{\kappa }\) on the possible values of the parameter \(\kappa \). Generalized IPMs either solve the LCP with an arbitrary matrix in time polynomial in \(\tilde{\kappa }\), the size n of the matrix and the bit length L of the matrix with integer data, or give a certificate that the matrix is not a \(P_*(\tilde{\kappa })\)-matrix.

An important question is whether the constant \(\kappa \) of a sufficient matrix can be polynomially bounded in terms of the bit size (and the size) of the matrix. De Klerk and Nagy [4] gave an example of a P-matrix for which \(\kappa \) must be at least of order \(2^n\). Therefore, complexity results for interior point algorithms for sufficient LCPs that are polynomial in \(\kappa \) and n, like the result of Kojima et al., do not ensure polynomial complexity of IPMs in the size n and the bit length L of the problem.

Predictor–corrector methods are superlinearly convergent, a very important feature which is one of the reasons for their excellent practical performance. For example, Ye et al. [34] proved that the duality gap of the sequence produced by the MTY algorithm converges quadratically to zero. This result was extended to monotone LCPs that are nondegenerate, in the sense that they have a strictly complementary solution [33]. High order methods can be viewed as following a rescaled central path, or as selecting the target points in the Newton system differently. The selection of the target point on the central path influences the resulting search directions. In this sense, (one of) the first higher order IPMs was published by Jansen et al. [13]. The corresponding Dikin type primal–dual affine scaling algorithm has O(nL) iteration complexity. This result was extended to \(P_*(\kappa )\)-LCPs by Illés et al. [12]. Considering high order methods also accelerates the convergence of the duality gap for degenerate problems. In [27], mth order derivatives were used to construct MTY type algorithms with Q-order \(m + 1\) for nondegenerate LCPs and \((m + 1)/2\) for degenerate LCPs. The complexity of the predictor–corrector algorithm for degenerate LCPs from [27] is analyzed in [28].

Although all the above mentioned algorithms operate in a small neighborhood of the central path, it is known that the best practical results are achieved by IPMs acting in a wide neighborhood of the central path, even though these have worse iteration complexity bounds. Recent research has shown that superlinearly convergent IPMs can be designed in the wide neighborhood of the central path with optimal, or close to optimal, computational complexity [25].

The predictor–corrector approach of [22] cannot be generalized to sufficient LCPs without explicit use of the handicap \(\kappa \) of the problem, because this variant uses a pair of neighborhoods, nested one inside the other, whose radii have to satisfy an inequality that depends on \(\kappa \). To overcome this difficulty, Potra [21] introduced the idea of the corrector–predictor method. This method uses only one neighborhood of the central path, thus avoiding the explicit relation between the radii of the neighborhoods. This scheme combines the twin goals of improving centrality and reducing the duality gap in a single corrector step, which is followed by a predictor step that reduces the duality gap further. Potra's result [21] was refined in [17], where the proposed high order corrector–predictor method uses a wide neighborhood of the central path, does not depend on \(\kappa \), has \( O((1+\kappa )\sqrt{n}L)\) iteration complexity, and is superlinearly convergent for general sufficient LCPs.

Ai and Zhang [1] introduced a new wide neighborhood \(\mathcal {N}_{2,\tau }^-(\alpha )\) and proposed a predictor–corrector method for solving monotone LCPs. It was the first algorithm in a wide neighborhood of the central path that enjoys the low iteration bound of \( O(\sqrt{n}L)\). Later, Li and Terlaky [16] extended Ai and Zhang's technique to semidefinite optimization (SDO). This method was generalized to second-order cone optimization (SOCO) by Feng [5]. In 2013, Liu et al. [18] extended Ai and Zhang's idea to symmetric cone optimization (SO). Recently, Potra [23] presented three interior-point algorithms for sufficient horizontal LCPs (HLCPs) acting in Ai and Zhang's wide neighborhood of the central path. Potra's second order corrector–predictor algorithm achieves superlinear convergence with Q-order 1.5 for degenerate problems and Q-order 3 for nondegenerate problems. Motivated by Potra's works [17, 23], we present two interior-point algorithms for solving \(P_*(\kappa )\)-LCPs acting in the wide neighborhood \(\mathcal {N}_{2,\tau }^-(\alpha )\). In order to gain additional efficiency, we use high order derivatives of the central path in both the corrector and the predictor step. We prove that our corrector–predictor algorithm is superlinearly convergent with Q-order \((m_p+1)\) for nondegenerate problems and \((m_p+1)/2\) for general problems.

The paper is organized as follows. In Sect. 2, we introduce the \(P_*(\kappa )\)-LCP and review some basic concepts of interior point methods for solving it, such as the central path and the neighborhoods of the central path. In Sect. 3, we state a high order large update path following algorithm, give several technical results that are used in the analysis of its polynomial complexity, and then establish the worst case iteration complexity of the proposed algorithm. A high order corrector–predictor method is presented in Sect. 4 and its complexity bound is derived. Finally, some conclusions are provided in Sect. 5.

2 The \(P_*(\kappa )\)-LCP and wide neighborhood

Given a matrix \(M\in {R}^{n\times n}\) and a vector \(q\in {R}^{n}\), the linear complementarity problem (LCP) seeks a vector pair \((x, s)\in R^{2n}\) which satisfies the following constraints:

$$\begin{aligned} -Mx+s=q,~xs=0,~~~x, s\ge 0. \end{aligned}$$
(2.1)

Throughout this paper, we assume that M is a \(P_*(\kappa )\)-matrix, in the sense that

$$\begin{aligned} (1+4\kappa )\sum _{i\in I_+} x_i(Mx)_i+\sum _{i\in I_-} x_i(Mx)_i\ge 0,~~ {\forall } x\in {R}^n, \end{aligned}$$
(2.2)

where \(\kappa \) is a nonnegative number and \(I_+=\{i: x_i(Mx)_i\ge 0\} ~\mathrm{and}~ I_-=\{i: x_i(Mx)_i<0\}\) are two index sets. If this condition is satisfied, we say that problem (2.1) is a \(P_*(\kappa )\)-LCP. The smallest \(\kappa \) with property (2.2) is called the handicap of the matrix. The class of \(P_*(\kappa )\)-matrices was first introduced by Kojima et al. [15], who established the existence of the central path and designed and analyzed IPMs for solving \(P_*(\kappa )\)-LCPs.
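As an illustration (not part of the original development), condition (2.2) can be probed numerically. The following minimal sketch, with function names of our own, evaluates the left-hand side of (2.2) and searches random directions for a violation; note that such sampling can only refute the \(P_*(\kappa )\) property, never certify it, in line with the hardness results recalled in the introduction.

```python
import numpy as np

def p_star_lhs(M, x, kappa):
    """Left-hand side of condition (2.2) for a given vector x."""
    y = x * (M @ x)                          # componentwise products x_i (Mx)_i
    return (1 + 4 * kappa) * y[y >= 0].sum() + y[y < 0].sum()

def refute_p_star(M, kappa, trials=1000, seed=0):
    """Random search for an x violating (2.2); returns a violating x or None.
    Failure to find one is NOT a certificate of the P_*(kappa) property."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = rng.standard_normal(M.shape[0])
        if p_star_lhs(M, x, kappa) < -1e-12:
            return x                         # certificate: M is not P_*(kappa)
    return None

# A positive semidefinite M is P_*(0), since then the sum in (2.2) is x^T M x >= 0.
A = np.random.default_rng(1).standard_normal((4, 4))
print(refute_p_star(A.T @ A, 0.0))           # None: no violation found
print(refute_p_star(-np.eye(4), 0.0) is None)  # False: -I is not P_*(0)
```

For \(\kappa =0\) the condition reduces to positive semidefiniteness of the quadratic form, which the first call reflects.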

For simplicity of notation, we denote by \(\mathcal {F}:=\big \{(x, s)\in R^{2n}: s=Mx+q, ~(x, s)\ge 0\big \}\) the feasible set and by \(\mathcal {F}^0:=\big \{(x, s)\in \mathcal {F}: (x, s)> 0\big \}\) the strictly feasible set of problem (2.1). We also define the following two sets:

$$\begin{aligned} \mathcal {F}^*:=\big \{(x^*, s^*)\in \mathcal {F}: (x^*)^Ts^*=0\big \},~~~ \mathcal {F}^c:=\big \{(x^*, s^*)\in \mathcal {F}^*: x^*+s^*>0\big \}. \end{aligned}$$

Clearly, \(\mathcal {F}^*\) is the solution set of problem (2.1). We call \(\mathcal {F}^c\) the strictly complementary solution set of problem (2.1). The \(P_*(\kappa )\)-LCP is said to be nondegenerate if \(\mathcal {F}^c\) is nonempty, and degenerate otherwise. It is well known that better superlinear convergence results can be obtained if the problem is nondegenerate. Hence, we consider these two cases separately, and give a unified treatment of them by introducing the parameter

$$\begin{aligned} \sigma :=\Bigg \{ \begin{array}{ll} 1,&{}~~~\mathrm{{for~general~}}P_*(\kappa )\mathrm{{-LCP~(default~option)}};~~~~~~~~~~~\\ 0,&{}~~~\mathrm{{if~the~}} P_*(\kappa )\mathrm{{-LCP~is~known~to~be~nondegenerate}}. \end{array} \end{aligned}$$
(2.3)

It is proved by Kojima et al. in [15] that the following central path problem

$$\begin{aligned} -Mx+s= & {} q,~~(x,s)>0,\\ xs= & {} \mu e,~~~~~~~~~~~~~~ \end{aligned}$$

has a unique solution \((x(\mu ), s(\mu ))\) for any barrier parameter \(\mu >0\), provided that \(\mathcal {F}^0\) is nonempty. The set of all such solutions is called the central path of the \(P_*(\kappa )\)-LCP, i.e.,

$$\begin{aligned} \mathcal {C}:=\big \{(x(\mu ), s(\mu )),~~\mu >0\big \}. \end{aligned}$$

As \(\mu \rightarrow 0\), the central path converges to a solution of problem (2.1) (Theorem 4.4 in [15]). The distance of a point \(z=(x, s)\in \mathcal {F}\) to the central path can be quantified by different proximity measures. The following proximity measure has been used by several authors (see, for example, [21, 23]):

$$\begin{aligned} \delta ^-_{\infty }(z):=\Big \Vert \Big (\frac{xs}{\mu }-e\Big )^-\Big \Vert _{\infty }, \end{aligned}$$

where \((v)^-\) denotes the negative part of the vector v, i.e., \((v)^- =-\max \{-v, 0\}\), and \(\mu =\frac{x^Ts}{n}.\) The neighborhood of the central path corresponding to the above proximity measure is defined as:

$$\begin{aligned} \mathcal {N}^{-}_{\infty }(\alpha )=\left\{ z\in \mathcal {F}^0 :\delta ^-_{\infty }(z)\le \alpha \right\} = \{z\in \mathcal {F}^0:xs\ge (1-\alpha )\mu e\} , \end{aligned}$$

where \(0<\alpha <1\) is a given parameter. In this paper, we restrict the iterates to the following neighborhood introduced by Ai and Zhang in [1]:

$$\begin{aligned} {\mathcal {N}}_{2,\tau }^-(\alpha )=\big \{z\in \mathcal {F}^0:\delta ^-_{2,\tau }(z)\le \alpha \big \},~\mathrm{where}~~\delta ^-_{2,\tau }(z):=\Big \Vert \Big (\frac{xs}{\tau \mu }-e\Big )^-\Big \Vert . \end{aligned}$$

One can easily verify that

$$\begin{aligned} \mathcal {N}_\infty ^-(1-\tau )\subset \mathcal {N}_{2,\tau }^{-}(\alpha )\subset \mathcal {N}_\infty ^{-}(1-(1-\alpha )\tau ), ~~~\forall \alpha ,\tau \in (0,1), \end{aligned}$$

which implies

$$\begin{aligned} xs\ge (1-\alpha )\tau \mu e,~~~\forall ~(x, s)\in \mathcal {N}_{2,\tau }^-(\alpha ). \end{aligned}$$
(2.4)

Since \(\mathcal {N}_\infty ^-(1-\tau )\) is a wide neighborhood, so is \(\mathcal {N}_{2,\tau }^-(\alpha )\). At this stage, it is worth noting that we will use the neighborhood \({\mathcal {N}}_{2,\tau }^-(\alpha )\) for \(\alpha \in (0, 1)\), whereas in [1] this neighborhood is used only for \(\alpha \in (0, \frac{1}{2}]\).
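To make the two proximity measures concrete, the following sketch (function names ours, feasibility with respect to M and q deliberately not checked) implements \(\delta ^-_\infty \) and \(\delta ^-_{2,\tau }\) for strictly positive pairs and spot-checks relation (2.4) on random data:

```python
import numpy as np

def delta_inf_minus(x, s):
    """delta^-_inf(z) = ||(xs/mu - e)^-||_inf with mu = x^T s / n."""
    mu = x @ s / x.size
    return np.abs(np.minimum(x * s / mu - 1.0, 0.0)).max()

def delta_2tau_minus(x, s, tau):
    """delta^-_{2,tau}(z) = ||(xs/(tau*mu) - e)^-||_2."""
    mu = x @ s / x.size
    return np.linalg.norm(np.minimum(x * s / (tau * mu) - 1.0, 0.0))

# spot-check relation (2.4): delta^-_{2,tau}(z) <= alpha implies xs >= (1-alpha)*tau*mu*e
rng = np.random.default_rng(0)
tau, alpha = 0.25, 0.5
for _ in range(1000):
    x, s = rng.uniform(0.1, 2.0, 8), rng.uniform(0.1, 2.0, 8)
    mu = x @ s / x.size
    if delta_2tau_minus(x, s, tau) <= alpha:
        assert np.all(x * s >= (1 - alpha) * tau * mu - 1e-12)

# on the central path (xs = mu*e) both measures vanish
e = np.ones(8)
print(delta_inf_minus(e, e), delta_2tau_minus(e, e, tau))  # 0.0 0.0
```

The component-wise lower bound in (2.4) follows because \(\delta ^-_{2,\tau }(z)\le \alpha \) forces every component of \(\big (\frac{xs}{\tau \mu }-e\big )^-\) to be at least \(-\alpha \).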

3 A higher order large update path following algorithm

Ai and Zhang [1] presented a large update path following algorithm for solving monotone LCPs. Their algorithm uses the \(\mathcal {N}_{2,\tau }^-(\alpha )\) neighborhood and decomposes the classical Newton direction, from xs to the target \(\tau \mu e\) on the central path, into the two orthogonal parts \((\tau \mu e-xs)^-\) and \((\tau \mu e-xs)^+\), using a different step length for each of them. In 2014, Potra [23] generalized the algorithm proposed in [1] to sufficient HLCPs. Potra presented three algorithms: a large update path following method, a first order corrector–predictor method and a second order corrector–predictor method. The large update path following algorithm proposed in this section acts in Ai and Zhang's wide neighborhood and uses \((\tau \mu e-xs)^-+\sqrt{n}(\tau \mu e-xs)^+\) to define the new search direction. We will also consider a high order variant of the large update algorithm.

3.1 Algorithmic framework

Given a point \(z=(x, s)\in \mathcal {N}_{2,\tau }^-(\alpha )\) at each step of the algorithm, we define an mth order vector valued polynomial in the step length \(\theta \) as follows:

$$\begin{aligned} z(\theta )=(x(\theta ), s(\theta ))=z+\sum _{i=1}^mw^i\theta ^i, \end{aligned}$$
(3.1)

where the vectors \(w^i=(u^i, v^i)\) are obtained as solutions of the following linear systems:

$$\begin{aligned}&\Bigg \{ \begin{array}{l} Mu^1-v^1=0,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ su^1+xv^1=(\tau \mu e-xs)^-+\sqrt{n}(\tau \mu e-xs)^+, \end{array}\nonumber \\&\Bigg \{ \begin{array}{l} Mu^2-v^2=0,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ su^2+xv^2=-u^1v^1,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \end{array}\nonumber \\&\Bigg \{ \begin{array}{l} Mu^i-v^i=0,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ su^i+xv^i=-\sum \nolimits _{j=1}^{i-1}u^jv^{i-j},~~~~~i=3,\ldots ,m. \end{array} \end{aligned}$$
(3.2)

Note that each of the m linear systems in (3.2) has a unique solution, because x and s are positive vectors and M is a \(P_0\)-matrix [15]. Since the m linear systems in (3.2) share the same coefficient matrix, only one matrix factorization and m backsolves are needed. Hence, computing all the directions involves \(O(n^3+mn^2)\) arithmetic operations.
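The "one factorization, m backsolves" scheme can be sketched as follows (a sketch of our own, using SciPy's LU routines on toy data): eliminating \(v^i=Mu^i\) from (3.2) leaves \((S+XM)u^i=\mathrm{rhs}_i\), where only the right-hand sides change from system to system.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def directions(M, x, s, m, tau):
    """Solve the m systems (3.2). Eliminating v^i = M u^i gives
    (S + X M) u^i = rhs_i, so one LU factorization serves all m backsolves."""
    n = x.size
    mu = x @ s / n
    r = tau * mu - x * s
    rhs = np.minimum(r, 0.0) + np.sqrt(n) * np.maximum(r, 0.0)  # (r)^- + sqrt(n)(r)^+
    lu = lu_factor(np.diag(s) + np.diag(x) @ M)                  # factor once: O(n^3)
    u, v = [], []
    for i in range(m):
        if i > 0:                                # rhs_i = -sum_{j<i} u^j v^{i-j}
            rhs = -sum(u[j] * v[i - 1 - j] for j in range(i))
        u.append(lu_solve(lu, rhs))              # one backsolve: O(n^2)
        v.append(M @ u[-1])
    return u, v

# sanity check on a random monotone instance
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
M = A.T @ A + np.eye(5)
x, s = rng.uniform(0.5, 1.5, 5), rng.uniform(0.5, 1.5, 5)
u, v = directions(M, x, s, m=3, tau=0.25)
print(np.allclose(M @ u[0] - v[0], 0), np.allclose(s * u[1] + x * v[1], -u[0] * v[0]))
```

The loop makes the cost structure explicit: the \(O(n^3)\) factorization is paid once, and each of the m systems costs only an \(O(n^2)\) backsolve.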

Given the polynomial (3.1), we want to choose the step size \(\bar{\theta }\) so that \(\mu (\theta )=\frac{x(\theta )^Ts(\theta )}{n}\) becomes as small as possible while the point stays in the neighborhood \(\mathcal {N}_{2,\tau }^-(\alpha )\). To this end, we define

$$\begin{aligned} \theta _1= & {} \max \Big \{\theta \in [0, 1]: x(\theta )>0,~ s(\theta )>0\Big \},~~~~~~~~~ \end{aligned}$$
(3.3)
$$\begin{aligned} \theta _2= & {} \max \Big \{\theta \in [0, 1]: \delta ^-_{2,\tau }(z(\theta ))\le \alpha , ~\theta \in [0, \theta _1]\Big \}, \end{aligned}$$
(3.4)
$$\begin{aligned} \bar{\theta }= & {} \mathrm{{argmin}}\Big \{\mu (z(\theta )): \theta \in \big [0,\theta _2\big ]\Big \}.~~~~~~~~~~~~~~~~~~~ \end{aligned}$$
(3.5)

By the definition of \(\bar{\theta }\), the point

$$\begin{aligned} \bar{z}=(\bar{x}, \bar{s}):=(x(\bar{\theta }), s(\bar{\theta })) \end{aligned}$$
(3.6)

belongs to the wide neighborhood \(\mathcal {N}_{2,\tau }^-(\alpha )\) and the process can be iterated.

The computation of the exact value of \(\bar{\theta }\) for \(m\ge 2\) is complicated, but good lower bounds on the exact value can be obtained by a line search procedure. For simplicity, we will assume that the exact value of \(\bar{\theta }\) is available in the following algorithm (Fig. 1).
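Such a line search can be sketched as follows (entirely our own illustration, restricted to the first order case \(m=1\), with the feasibility bookkeeping for \(\mathcal {F}^0\) omitted): scan \(\theta \) on a grid, stop once positivity or the neighborhood condition fails, and keep the best \(\mu (\theta )\) seen, which approximates (3.3)–(3.5) up to the grid resolution.

```python
import numpy as np

def theta_bar_search(M, x, s, tau, alpha, grid=1000):
    """Approximate the step size of (3.3)-(3.5) for m = 1 by a grid search."""
    n = x.size
    mu = x @ s / n
    r = tau * mu - x * s
    rhs = np.minimum(r, 0.0) + np.sqrt(n) * np.maximum(r, 0.0)
    u = np.linalg.solve(np.diag(s) + np.diag(x) @ M, rhs)   # first order direction
    v = M @ u
    best_theta, best_mu = 0.0, mu
    for theta in np.linspace(0.0, 1.0, grid + 1):
        xt, st = x + theta * u, s + theta * v
        if np.any(xt <= 0) or np.any(st <= 0):
            break                       # beyond theta_1: positivity lost
        mut = xt @ st / n
        if np.linalg.norm(np.minimum(xt * st / (tau * mut) - 1.0, 0.0)) > alpha:
            break                       # beyond theta_2: left the neighborhood
        if mut < best_mu:
            best_theta, best_mu = theta, mut
    return best_theta, best_mu

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
M = A.T @ A + np.eye(6)                 # monotone instance, kappa = 0
x = s = np.ones(6)                      # a perfectly centered point, mu = 1
theta, mu_new = theta_bar_search(M, x, s, tau=0.25, alpha=0.5)
print(theta > 0 and mu_new < 1.0)       # the step makes progress: True
```

Stopping at the first violation is a conservative reading of (3.3)–(3.4); the theoretical lower bounds of Sect. 3.3 guarantee that a positive step always survives this test.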

Fig. 1 Algorithm 1

3.2 Polynomial complexity

In order to achieve the iteration complexity bound for the proposed Algorithm 1, we need some technical results.

Lemma 3.1

(cf. Lemma 4 in [32]) Let \((x, s)\in \mathcal {N}_{2,\tau }^-(\alpha )\), then

$$\begin{aligned} \big \Vert (xs)^{-\frac{1}{2}}\big ((\tau \mu e-xs)^-+\sqrt{n}(\tau \mu e-xs)^+\big )\big \Vert ^2\le \Big (1+\frac{\alpha ^2\tau }{1-\alpha }\Big )n\mu . \end{aligned}$$

Lemma 3.2

Let \((x, s)\in \mathcal {N}_{2,\tau }^-(\alpha )\), then

$$\begin{aligned} (\tau -1)n\mu \le e^T\big ((\tau \mu e-xs)^-+\sqrt{n}(\tau \mu e-xs)^+\big )\le (\tau +\alpha \tau -1)n\mu . \end{aligned}$$

Proof

Since

$$\begin{aligned} e^T\big ((\tau \mu e-xs)^-+\sqrt{n}(\tau \mu e-xs)^+\big )=e^T(\tau \mu e-xs)+(\sqrt{n}-1)e^T(\tau \mu e-xs)^+, \end{aligned}$$

we have

$$\begin{aligned} (\tau -1)n\mu =e^T(\tau \mu e-xs)\le & {} e^T(\tau \mu e-xs)+(\sqrt{n}-1)e^T(\tau \mu e-xs)^+\\\le & {} (\tau -1)n\mu +(\sqrt{n}-1)\Vert e\Vert \Vert (\tau \mu e-xs)^+\Vert \\\le & {} (\tau -1)n\mu +(\sqrt{n}-1)\sqrt{n}~\alpha \tau \mu ~~~~~~~~~~~~~\\\le & {} (\tau +\alpha \tau -1)n\mu ,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \end{aligned}$$

where the second inequality follows from the Cauchy–Schwarz inequality and the third inequality follows from \((x, s)\in \mathcal {N}_{2,\tau }^-(\alpha )\). This completes the proof. \(\square \)

Lemma 3.3

(cf. Lemma 3.4 in [23]) If LCP is \(P_{*}(\kappa )\), then for any positive vector \((x, s)>0\) and any \(a\in R^n\) the linear system

$$\begin{aligned} Mu-v= & {} 0,\\ xv+su= & {} a, \end{aligned}$$

has a unique solution \((u, v)\), for which the following estimates hold:

$$\begin{aligned}&\mathrm{(i)}~-\kappa \Vert \widetilde{a}\Vert ^2\le u^Tv\le \frac{1}{4}\Vert \widetilde{a}\Vert ^2,~~\mathrm{(ii)}~\Vert Du\Vert ^2+\Vert D^{-1}v\Vert ^2\le (1+2\kappa )\Vert \widetilde{a}\Vert ^2,\\&\mathrm{(iii)}~\Vert uv\Vert \le \Big (\frac{1}{\sqrt{8}}+\kappa \Big )\Vert \widetilde{a}\Vert ^2\le \frac{1}{2}(1+2\kappa )\Vert \widetilde{a}\Vert ^2,~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \end{aligned}$$

where \(\tilde{a}=(xs)^{-1/2}a\) and \(D= X^{-1/2}S^{1/2}\) with \(X=\mathrm{diag}(x), S=\mathrm{diag}(s).\)
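For the monotone case \(\kappa =0\) (M positive semidefinite), these estimates are easy to verify numerically; the following sketch, with toy data of our own, checks (i)–(iii):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
M = A.T @ A                              # positive semidefinite => P_*(0), kappa = 0
x, s = rng.uniform(0.5, 2.0, n), rng.uniform(0.5, 2.0, n)
a = rng.standard_normal(n)

# solve Mu - v = 0, xv + su = a by eliminating v = Mu
u = np.linalg.solve(np.diag(s) + np.diag(x) @ M, a)
v = M @ u
at = a / np.sqrt(x * s)                  # a~ = (xs)^{-1/2} a
d = np.sqrt(s / x)                       # diagonal of D = X^{-1/2} S^{1/2}
na2 = at @ at

ok_i = -1e-9 <= u @ v <= 0.25 * na2 + 1e-9                    # estimate (i), kappa = 0
ok_ii = (d * u) @ (d * u) + (v / d) @ (v / d) <= na2 + 1e-9   # estimate (ii)
ok_iii = np.linalg.norm(u * v) <= na2 / np.sqrt(8) + 1e-9     # estimate (iii)
print(ok_i, ok_ii, ok_iii)               # True True True
```

Here \(u^Tv=u^TMu\ge 0\) explains the lower bound in (i), and the identity \(Du+D^{-1}v=\tilde{a}\) drives the other two estimates.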

The following lemma is a slight improvement over the corresponding results in [17]. We first note that from (3.1) and (3.2) it follows that

$$\begin{aligned} x(\theta )s(\theta )= & {} \left( x+\sum _{i=1}^m\theta ^i u^i\right) \left( s+\sum _{i=1}^m\theta ^i v^i\right) \nonumber \\= & {} xs+\theta (xv^1+su^1)+\theta ^2(xv^2+su^2+u^1v^1)\nonumber \\&+\cdots +\theta ^m\left( xv^m+su^m+\sum _{j=1}^{m-1}u^jv^{m-j}\right) \nonumber \\&+\,\,\theta ^{m+1}(u^1v^m+u^2v^{m-1}+\cdots +u^mv^1)+\cdots +\theta ^{2m}u^mv^m\nonumber \\= & {} xs+\theta ((\tau \mu e-xs)^-+\sqrt{n}(\tau \mu e-xs)^+)+\sum _{i=m+1}^{2m}\theta ^ih^i, \end{aligned}$$
(3.7)
$$\begin{aligned} \mu (\theta )= & {} \mu +\frac{\theta e^T}{n}((\tau \mu e-xs)^-+\sqrt{n}(\tau \mu e-xs)^+)+\sum _{i=m+1}^{2m}\theta ^i\frac{e^Th^i}{n},~~~~ \end{aligned}$$
(3.8)

where \(h^i=\sum _{j=i-m}^mu^jv^{i-j}\).

Lemma 3.4

If LCP is \(P_{*}(\kappa )\) and \((x, s)\in \mathcal {N}_{2,\tau }^-(\alpha )\), then the directions \((u^i, v^i)\) in (3.2) satisfy

$$\begin{aligned} \sqrt{\frac{\Vert Du^i\Vert ^2+\Vert D^{-1}v^i\Vert ^2}{1+2\kappa }}\le \eta _i\le \frac{2\alpha _i\sqrt{(1-\alpha )\tau \mu }}{1+2\kappa } \Bigg (\frac{(1+2\kappa )\sqrt{1+\frac{\alpha ^2\tau }{1-\alpha }}\sqrt{n}}{2\sqrt{(1-\alpha )\tau }}\Bigg )^i, \end{aligned}$$
(3.9)

where \(\eta _i=\Vert Du^i+D^{-1}v^i\Vert \) and

$$\begin{aligned} \alpha _i=\frac{1}{i}\binom{2i-2}{i-1}\le \frac{1}{i}4^i \end{aligned}$$

is the solution of the following recurrence scheme:

$$\begin{aligned} \alpha _1=1,~~~\alpha _i=\sum _{j=1}^{i-1}\alpha _j\alpha _{i-j}. \end{aligned}$$

Proof

By using the second equations of (3.2) and part (i) of Lemma 3.3 we get

$$\begin{aligned} \Vert Du^i\Vert ^2+\Vert D^{-1}v^i\Vert ^2= & {} \eta _i^2-2{u^i}^Tv^i\le \eta _i^2+2\kappa \Vert \tilde{a}\Vert ^2~~~~\\= & {} \eta _i^2+2\kappa \Vert (xs)^{-1/2}(su^i+xv^i)\Vert ^2\\= & {} \eta _i^2+2\kappa \Vert Du^i+D^{-1}v^i\Vert ^2~~~~~~~~\\= & {} \eta _i^2+2\kappa \eta _i^2. \end{aligned}$$

This implies the first inequality of the lemma. Multiplying the second equations of (3.2) by \((xs)^{-1/2}\), we obtain

$$\begin{aligned} Du^1 + D^{-1}v^1= & {} (xs)^{-1/2}((\tau \mu e-xs)^-+\sqrt{n}(\tau \mu e-xs)^+),\\ Du^2 + D^{-1}v^2= & {} -(xs)^{-1/2} u^1v^1,\\ Du^i + D^{-1}v^i= & {} -(xs)^{-1/2}\sum _{j=1}^{i-1}u^jv^{i-j},~~~i=3, \ldots , m. \end{aligned}$$

Taking norms on both sides of the above equations, we obtain, respectively,

$$\begin{aligned} \eta _1=\Vert (xs)^{-1/2}((\tau \mu e-xs)^-+\sqrt{n}(\tau \mu e-xs)^+)\Vert \le \sqrt{\left( 1+\frac{\alpha ^2\tau }{1-\alpha }\right) n\mu }, \end{aligned}$$

where the inequality follows from Lemma 3.1, and

$$\begin{aligned} \eta _2= & {} \Vert (xs)^{-1/2} u^1v^1\Vert \le \Vert (xs)^{-1/2}\Vert _{\infty }\Vert u^1v^1\Vert \le \frac{\Vert u^1v^1\Vert }{\sqrt{(1-\alpha )\tau \mu }}~~~~~~\\\le & {} \frac{(1+2\kappa )}{2\sqrt{(1-\alpha )\tau \mu }}\Vert (xs)^{-1/2}((\tau \mu e-xs)^-+\sqrt{n}(\tau \mu e-xs)^+)\Vert ^2\\\le & {} \frac{(1+2\kappa )\left( 1+\frac{\alpha ^2\tau }{1-\alpha }\right) n}{2\sqrt{(1-\alpha )\tau }} \sqrt{\mu }. \end{aligned}$$

These show that the second inequality in (3.9) holds for \(i=1, 2\). For \(i=3, \ldots , m,\) we have

$$\begin{aligned} \eta _i\le & {} \frac{1}{\sqrt{(1-\alpha )\tau \mu }}\sum _{j=1}^{i-1}\Vert Du^j\Vert \Vert D^{-1}v^{i-j}\Vert ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nonumber \\\le & {} \frac{1}{2\sqrt{(1-\alpha )\tau \mu }}\sum _{j=1}^{i-1}\Big (\big \Vert Du^j\big \Vert \big \Vert D^{-1}v^{i-j}\big \Vert +\big \Vert Du^{i-j}\big \Vert \big \Vert D^{-1}v^j\big \Vert \Big ). \end{aligned}$$
(3.10)

Now, using the inequality \(ab+cd\le \sqrt{a^2+c^2}\sqrt{b^2+d^2}\), which holds for all \(a, b, c, d\ge 0\), we get

$$\begin{aligned}&\big \Vert Du^j\big \Vert \big \Vert D^{-1}v^{i-j}\big \Vert +\big \Vert Du^{i-j}\big \Vert \big \Vert D^{-1}v^j\big \Vert ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\&\quad \le \sqrt{\big \Vert Du^j\big \Vert ^2+\big \Vert D^{-1}v^j\big \Vert ^2}\sqrt{\big \Vert Du^{i-j}\big \Vert ^2+\big \Vert D^{-1}v^{i-j}\big \Vert ^2}~\\&\quad \le (1 + 2\kappa )\eta _j\eta _{i-j}, \end{aligned}$$

where the last inequality follows from the first inequality of (3.9). Substitution of this bound into (3.10) yields

$$\begin{aligned} \eta _i\le \frac{(1+2\kappa )}{2\sqrt{(1-\alpha )\tau \mu }}\sum _{j=1}^{i-1}\eta _j\eta _{i-j}. \end{aligned}$$

The required inequalities then follow easily from the recurrence for the \(\alpha _i\) and mathematical induction. \(\square \)
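The numbers \(\alpha _i\) are exactly the Catalan numbers \(C_{i-1}\); the closed form, the recurrence and the bound \(\alpha _i\le \frac{1}{i}4^i\) can be checked directly (a small sketch of our own):

```python
from math import comb

def alpha(i):
    """alpha_i = C(2i-2, i-1)/i, the (i-1)st Catalan number (always an integer)."""
    return comb(2 * i - 2, i - 1) // i

# recurrence alpha_1 = 1, alpha_i = sum_{j=1}^{i-1} alpha_j * alpha_{i-j}
for i in range(2, 15):
    assert alpha(i) == sum(alpha(j) * alpha(i - j) for j in range(1, i))
    assert alpha(i) <= 4**i / i          # the bound used in the proof of Lemma 3.5

print([alpha(i) for i in range(1, 7)])   # [1, 1, 2, 5, 14, 42]
```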

By virtue of Lemma 3.4 we obtain the following bound for \(\Vert h^i\Vert \).

Lemma 3.5

If LCP is \(P_{*}(\kappa )\) and \((x, s)\in \mathcal {N}_{2,\tau }^-(\alpha )\), then

$$\begin{aligned} \Vert h^i\Vert \le \frac{2(1-\alpha )\tau \mu }{(1+2\kappa )i} \Bigg (\frac{2(1+2\kappa )\sqrt{1+\frac{\alpha ^2\tau }{1-\alpha }}}{\sqrt{(1-\alpha )\tau }}\sqrt{n}\Bigg )^i,~~i=m+1, \ldots , 2m. \end{aligned}$$

Proof

Using exactly the same arguments as in the proof of Lemma 4.4 in [17], for any \(m + 1\le i\le 2m\), we have

$$\begin{aligned} \big \Vert h^i\big \Vert\le & {} \frac{1}{2}\sum _{j=1}^{i-1}\sqrt{\big \Vert Du^j\big \Vert ^2 +\big \Vert D^{-1}v^{j}\big \Vert ^2} \sqrt{\big \Vert Du^{i-j}\big \Vert ^2+\big \Vert D^{-1}v^{i-j}\big \Vert ^2}\\\le & {} \frac{2(1-\alpha )\tau \mu }{1+2\kappa } \Bigg (\frac{(1+2\kappa )\sqrt{1+\frac{\alpha ^2\tau }{1-\alpha }}\sqrt{n}}{2\sqrt{(1-\alpha )\tau }}\Bigg )^i \sum _{j=1}^{i-1}\alpha _j\alpha _{i-j}~~~~~~~~~\\= & {} \frac{2(1-\alpha )\tau \mu }{1+2\kappa } \Bigg (\frac{(1+2\kappa )\sqrt{1+\frac{\alpha ^2\tau }{1-\alpha }}\sqrt{n}}{2\sqrt{(1-\alpha )\tau }}\Bigg )^i\alpha _i~~~~~~~~~~~~~~~~~~~~\\\le & {} \frac{2(1-\alpha )\tau \mu }{(1+2\kappa )i} \left( \frac{2(1+2\kappa )\sqrt{1+\frac{\alpha ^2\tau }{1-\alpha }}\sqrt{n}}{\sqrt{(1-\alpha )\tau }}\right) ^i, \end{aligned}$$

where the second inequality follows from Lemma 3.4 and the last inequality follows from the fact that \(\alpha _i\le \frac{1}{i}4^i\). This completes the proof. \(\square \)

Corollary 3.6

If LCP is \(P_{*}(\kappa )\) and \((x, s)\in \mathcal {N}_{2,\tau }^-(\alpha )\), then the following relations hold for any \(\delta >0\):

$$\begin{aligned}&\mathrm{(i)}~\frac{\delta }{\mu }\sum _{i=m+1}^{2m}\theta ^i\big \Vert h^i\big \Vert<1,~~~~0\le \theta \le \frac{\sqrt{(1-\alpha )\tau }\min \Bigg \{1,\Big (\frac{1.4\delta (1-\alpha )\tau }{1+2\kappa }\Big )^{\frac{-1}{m+1}}\Bigg \} }{2(1+2\kappa )\sqrt{1+\frac{\alpha ^2\tau }{1-\alpha }}\sqrt{n}}.~~~~~~~~~~\\&\mathrm{(ii)}~\frac{\delta }{\mu \sqrt{n}}\sum _{i=m+1}^{2m}\theta ^i\big \Vert h^i\big \Vert <\theta ,\\&\quad \quad 0\le \theta \le \frac{\sqrt{(1-\alpha )\tau }\min \Big \{1,\Big (2.8\delta \sqrt{(1-\alpha )\tau }\sqrt{1+\frac{\alpha ^2\tau }{1-\alpha }}\Big ) ^{\frac{-1}{m}}\Big \}}{2(1+2\kappa )\sqrt{1+\frac{\alpha ^2\tau }{1-\alpha }}\sqrt{n}}. \end{aligned}$$

Proof

Using Lemma 3.5 and the inequality \(\sum _{i=m+1}^{2m}\frac{t^i}{i}<0.7t^{m+1}\), which holds for \(t\in (0, 1]\) [21], we obtain, for \(0\le \theta \le \frac{\sqrt{(1-\alpha )\tau }}{2(1+2\kappa )\sqrt{1+\frac{\alpha ^2\tau }{1-\alpha }}\sqrt{n}}\),

$$\begin{aligned} \sum _{i=m+1}^{2m}\theta ^i\big \Vert h^i\big \Vert <\frac{1.4(1-\alpha )\tau \mu }{1+2\kappa } \Bigg (\frac{2(1+2\kappa )\sqrt{1+\frac{\alpha ^2\tau }{1-\alpha }}\sqrt{n}}{\sqrt{(1-\alpha )\tau }}\theta \Bigg )^{m+1}. \end{aligned}$$
(3.11)

Therefore, by using (3.11) it follows that

$$\begin{aligned} \frac{\delta }{\mu }\sum _{i=m+1}^{2m}\theta ^i\Vert h^i\Vert <1 \end{aligned}$$

if

$$\begin{aligned} 0\le \theta \le \frac{\sqrt{(1-\alpha )\tau }}{2(1+2\kappa )\sqrt{1+\frac{\alpha ^2\tau }{1-\alpha }}\sqrt{n}}\min \Big \{1,\Big (\frac{1.4\delta (1-\alpha )\tau }{1+2\kappa }\Big )^{\frac{-1}{m+1}}\Big \}. \end{aligned}$$

We also get

$$\begin{aligned} \frac{\delta }{\mu \sqrt{n}}\sum _{i=m+1}^{2m}\theta ^i\Vert h^i\Vert< & {} \frac{1.4\delta (1-\alpha )\tau }{(1+2\kappa )\sqrt{n}} \Bigg (\frac{2(1+2\kappa )\sqrt{1+\frac{\alpha ^2\tau }{1-\alpha }}\sqrt{n}}{\sqrt{(1-\alpha )\tau }}\theta \Bigg )^{m+1}\\= & {} 2.8\delta \sqrt{(1-\alpha )\tau }\sqrt{1+\frac{\alpha ^2\tau }{1-\alpha }}\theta \Bigg (\frac{2(1+2\kappa )\sqrt{1+\frac{\alpha ^2\tau }{1-\alpha }}\sqrt{n}}{\sqrt{(1-\alpha )\tau }}\theta \Bigg )^{m}\le \theta ,~~~~~~~~~~~~~ \end{aligned}$$

if

$$\begin{aligned} 0\le \theta \le \frac{\sqrt{(1-\alpha )\tau }}{2(1+2\kappa )\sqrt{1+\frac{\alpha ^2\tau }{1-\alpha }}\sqrt{n}} \min \left\{ 1,\Big (2.8\delta \sqrt{(1-\alpha )\tau }\sqrt{1+\frac{\alpha ^2\tau }{1-\alpha }}\Big )^{\frac{-1}{m}}\right\} . \end{aligned}$$

This completes the proof. \(\square \)
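The elementary inequality \(\sum _{i=m+1}^{2m}\frac{t^i}{i}<0.7\,t^{m+1}\) for \(t\in (0, 1]\), borrowed from [21] and used in the proof above, is easy to confirm numerically (sampling ranges chosen by us); note that as \(m\rightarrow \infty \) at \(t=1\) the left-hand side approaches \(\ln 2\approx 0.693<0.7\), so the constant is tight:

```python
import numpy as np

# sample the inequality sum_{i=m+1}^{2m} t^i/i < 0.7 * t^(m+1) on a grid of (m, t)
for m in (1, 2, 3, 5, 10, 20):
    for t in np.linspace(0.01, 1.0, 100):
        lhs = sum(t**i / i for i in range(m + 1, 2 * m + 1))
        assert lhs < 0.7 * t**(m + 1)
print("inequality holds on the sampled grid")
```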

3.3 Fixing the step size \(\bar{\theta }\)

Lemma 3.7

If \((x, s)\in \mathcal {N}_{2,\tau }^-(\alpha )\) with \(0<\tau \le \frac{1}{4}\) and \(0<\alpha <1\), then the quantities \(\theta _1\) and \(\theta _2\) defined in (3.3) and (3.4) satisfy

$$\begin{aligned} \theta _1\ge & {} \theta _3:=\frac{\sqrt{(1-\alpha )\tau }}{2(1+2\kappa )\sqrt{1+\frac{\alpha ^2\tau }{1-\alpha }}\sqrt{n}}\Bigg (\frac{2.8}{1+2\kappa }\Bigg )^{\frac{-1}{m+1}},\nonumber \\ \theta _2\ge & {} \theta _4:=\frac{\sqrt{(1-\alpha )\tau }}{2(1+2\kappa )\sqrt{1+\frac{\alpha ^2\tau }{1-\alpha }}\sqrt{n}} \Bigg (\frac{11.2\sqrt{(1-\alpha )\tau }\sqrt{1+\frac{\alpha ^2\tau }{1-\alpha }}}{\alpha \tau }\Bigg ) ^{\frac{-1}{m}}, \end{aligned}$$
(3.12)

which implies that \(\bar{\theta }\ge \theta _4\).

Proof

Using (3.7) and the fact that \((\tau \mu e-xs)^+\ge 0\) we deduce that

$$\begin{aligned} x(\theta )s(\theta )= & {} xs+\theta ((\tau \mu e-xs)+(\sqrt{n}-1)(\tau \mu e-xs)^+)+\sum _{i=m+1}^{2m}\theta ^ih^i~\\\ge & {} (1-\theta )xs+\theta \tau \mu e+\sum _{i=m+1}^{2m}\theta ^ih^i\ge (1-\theta )(1-\alpha )\tau \mu e-\sum _{i=m+1}^{2m}\theta ^i\big \Vert h^i\big \Vert e\\\ge & {} (1-\theta )(1-\alpha )\tau \mu e-\frac{\mu }{\delta }e, \end{aligned}$$

where the second inequality follows from (2.4) and from \(-\Vert a\Vert e\le a\) for each \(a\in R^n\), and the last inequality comes from Corollary 3.6 (i). Substituting \(\delta =\frac{2}{(1-\alpha )\tau }\) in Corollary 3.6 (i) and using the fact that \(\theta _3<\frac{1}{2}\), we conclude that \(x(\theta )s(\theta )>0\) for each \(0\le \theta \le \theta _3\), which implies that \(x(\theta )\ne 0\) and \(s(\theta )\ne 0\). Since \(x(0)>0\) and \(s(0)> 0\), by continuity, \(x(\theta )>0\) and \(s(\theta )>0\) for all \(0\le \theta \le \theta _3\). This proves that \(\theta _1\ge \theta _3\).

Now, we show that \(\delta ^-_{2,\tau }(z(\theta ))\le \alpha \) for every \(0\le \theta \le \theta _4.\) To this end, from (3.7) and (3.8) we obtain

$$\begin{aligned} x(\theta )s(\theta )-\tau \mu (\theta )e= & {} xs-\tau \mu e+\theta ((\tau \mu e-xs)^-+\sqrt{n}(\tau \mu e-xs)^+)\\&-\frac{\tau \theta e^T}{n}((\tau \mu e-xs)^-+\sqrt{n}(\tau \mu e-xs)^+)e\\&+\sum _{i=m+1}^{2m}\theta ^ih^i-\sum _{i=m+1}^{2m}\tau \theta ^i\frac{e^Th^i}{n}e\\= & {} (1-\theta )(xs-\tau \mu e)^++(1-\theta \sqrt{n})(xs-\tau \mu e)^{-}\\&-\frac{\tau \theta e^T}{n}((\tau \mu e-xs)^-+\sqrt{n}(\tau \mu e-xs)^+)e\\&+\sum _{i=m+1}^{2m}\theta ^i\Big (h^i-\frac{\tau e^Th^i}{n}e\Big ).\\ \end{aligned}$$

Using Lemma 3.2 we have \(-e^T((\tau \mu e-xs)^-+\sqrt{n}(\tau \mu e-xs)^+)\ge (1-\alpha \tau -\tau )>0\). Therefore, we obtain

$$\begin{aligned} x(\theta )s(\theta )-\tau \mu (\theta )e\ge (1-\theta \sqrt{n})(xs-\tau \mu e)^-+\sum _{i=m+1}^{2m}\theta ^i\Big (h^i-\frac{\tau e^Th^i}{n}e\Big )^{-}.\quad \end{aligned}$$
(3.13)

Now, using the inequalities \(\Vert v^-\Vert \le \Vert u^-\Vert \) whenever \(v\ge u\), and \(\Vert (u+v)^-\Vert \le \Vert u^-\Vert +\Vert v^-\Vert \) for any \(u, v\in R^n\) (Lemma 3.3 in [23]), inequality (3.13) implies that

$$\begin{aligned} \big \Vert \big (x(\theta )s(\theta )-\tau \mu (\theta )e\big )^-\big \Vert\le & {} (1-\theta \sqrt{n})\big \Vert (xs-\tau \mu e)^-\big \Vert \nonumber \\&+\Big \Vert \sum _{i=m+1}^{2m}\theta ^i\Big (h^i-\frac{\tau e^Th^i}{n}e\Big )^-\Big \Vert \nonumber \\\le & {} (1-\theta \sqrt{n})\big \Vert (xs-\tau \mu e)^-\big \Vert +\Big \Vert \sum _{i=m+1}^{2m}\theta ^i\Big (h^i-\frac{\tau e^Th^i}{n}e\Big )\Big \Vert ~~\nonumber \\\le & {} (1-\theta \sqrt{n})\alpha \tau \mu +\sum _{i=m+1}^{2m}\theta ^i\big \Vert h^i\big \Vert . \end{aligned}$$
(3.14)

On the other hand, from (3.8) we have

$$\begin{aligned} \mu (\theta )= & {} \mu +\frac{\theta }{n}e^T((\tau \mu e-xs)+(\sqrt{n}-1)(\tau \mu e-xs)^+)+\sum _{i=m+1}^{2m}\theta ^i\frac{e^Th^i}{n}\nonumber \\\ge & {} \mu +\theta (\tau -1)\mu -\frac{1}{\sqrt{n}}\sum _{i=m+1}^{2m}\theta ^i\big \Vert h^i\big \Vert \nonumber \\= & {} \Big (1-\theta (1-\tau )-\frac{1}{\mu \sqrt{n}}\sum _{i=m+1}^{2m}\theta ^i\big \Vert h^i\big \Vert \Big )\mu . \end{aligned}$$
(3.15)

By using (3.14), (3.15), \(n\ge 3\) and Corollary 3.6 (ii) with \(\delta =\frac{4}{\alpha \tau }\) we deduce that the following inequality holds for any \(0\le \theta \le \theta _4\).

$$\begin{aligned}&\big \Vert \big (x(\theta )s(\theta )-\tau \mu (\theta )e\big )^-\big \Vert -\alpha \tau \mu (\theta )\le (1-\theta \sqrt{n})\alpha \tau \mu \nonumber \\&\quad +\sum _{i=m+1}^{2m}\theta ^i\big \Vert h^i\big \Vert -\alpha \tau \mu \Big (1-\theta (1-\tau )-\frac{1}{\mu \sqrt{n}}\sum _{i=m+1}^{2m}\theta ^i\big \Vert h^i\big \Vert \Big )\nonumber \\&\quad =\theta \sqrt{n}\alpha \tau \mu \Big (-1+\frac{1-\tau }{\sqrt{n}}\Big ) +\Big (1+\frac{\alpha \tau }{\sqrt{n}}\Big )\sum _{i=m+1}^{2m}\theta ^i\big \Vert h^i\big \Vert \nonumber \\&\quad \le \theta \sqrt{n}\alpha \tau \mu \Big (-1+\frac{1-\tau }{\sqrt{n}}\Big ) +\Big (1+\frac{\alpha \tau }{\sqrt{n}}\Big )\frac{\theta \sqrt{n}\mu }{\delta }\nonumber \\&\quad \le \theta \sqrt{n}\alpha \tau \mu \Big (-1+\frac{1}{\sqrt{n}}+ \frac{1.15}{4}\Big )<0. \end{aligned}$$
(3.16)

Since \(\theta _4\le \theta _3\), this completes the proof; that is, \((x(\theta ), s(\theta ))\in \mathcal {N}_{2,\tau }^-(\alpha )\) for any \(0\le \theta \le \theta _4\). \(\square \)
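As a numerical sanity check (not part of the proof), the bounds (3.12) can be evaluated directly; the sketch below does so for sample parameter values of \(\alpha, \tau, \kappa, n, m\), which are our illustrative assumptions and not taken from the paper.

```python
import math

def theta_bounds(alpha, tau, kappa, n, m):
    # Evaluate theta_3 and theta_4 from (3.12).
    base = math.sqrt((1 - alpha) * tau) / (
        2 * (1 + 2 * kappa) * math.sqrt(1 + alpha**2 * tau / (1 - alpha)) * math.sqrt(n))
    theta3 = base * (2.8 / (1 + 2 * kappa)) ** (-1 / (m + 1))
    theta4 = base * (11.2 * math.sqrt((1 - alpha) * tau)
                     * math.sqrt(1 + alpha**2 * tau / (1 - alpha)) / (alpha * tau)) ** (-1 / m)
    return theta3, theta4

# sample values: alpha=0.5, tau=0.25, kappa=0, n=100, m=2
t3, t4 = theta_bounds(0.5, 0.25, 0.0, 100, 2)
```

For these values one observes \(0<\theta _4\le \theta _3<\frac{1}{2}\), consistent with the relations used in the proof above.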

Theorem 3.8

If the LCP is \(P_{*}(\kappa )\), then Algorithm 1 is well defined, produces a sequence of points \(\bar{z}^k\) belonging to the neighborhood \(\mathcal {N}_{2,\tau }^-(\alpha )\), and

$$\begin{aligned} \bar{\mu }_{k+1}\le \Big (1-\frac{\chi (\alpha , \tau , m)}{(1+2\kappa )\sqrt{n}}\Big )\bar{\mu }_{k},~~~k=0,1,\ldots , \end{aligned}$$

where

$$\begin{aligned} \chi (\alpha , \tau , m)=\frac{7(\alpha \tau )^{\frac{1}{m}}((1-\alpha )\tau )^{\frac{1}{2}-\frac{1}{2m}}}{32\root m \of {11.2}\big (1+\frac{\alpha ^2\tau }{1-\alpha }\big )^{\frac{1}{2}+\frac{1}{2m}}}.~~~~~~~~~ \end{aligned}$$

Proof

According to (3.8) and using Lemma 3.2 we deduce that

$$\begin{aligned} \mu (\theta )\le & {} \mu +\theta (\tau +\alpha \tau -1)\mu +\frac{1}{\sqrt{n}}\sum _{i=m+1}^{2m}\theta ^i\Vert h^i\Vert \nonumber \\\le & {} \mu +\theta (\tau +\alpha \tau -1)\mu +\frac{\theta \alpha \tau }{4}\mu \le \Big (1-\frac{7\theta }{16}\Big )\mu , \end{aligned}$$
(3.17)

where the second inequality follows from \(\sum _{i=m+1}^{2m}\theta ^i\Vert h^i\Vert \le \frac{\theta \sqrt{n}\alpha \tau \mu }{4}\) for \(0\le \theta \le \theta _4\), and the last inequality comes from Corollary 3.6 (ii) and the fact that \(0<\alpha <1\) and \(0<\tau \le \frac{1}{4}\). From the definition of \(\bar{\theta }\) [see (3.5)] and (3.17) it follows that

$$\begin{aligned} \mu (\bar{\theta })\le \mu (\theta _4)\le \Big (1-\frac{7\theta _4}{16}\Big )\mu . \end{aligned}$$

This completes the proof. \(\square \)

Corollary 3.9

Under the hypothesis of Theorem 3.8, Algorithm 1 produces a point \({\bar{z}}\in \mathcal {N}_{2,\tau }^-(\alpha )\) with \({\bar{\mu }}(z)\le \epsilon \) in at most \(O((1+\kappa )\sqrt{n}L)\) iterations, where \(L=\log \Big (\frac{\bar{\mu }(z^0)}{\epsilon }\Big )\).
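The iteration bound follows from the geometric contraction of Theorem 3.8. A minimal sketch, with illustrative values of \(\mu _0, \epsilon , \chi , \kappa , n\) (our assumptions, not computed from the paper's formula), iterates the contraction and compares the count against the stated bound:

```python
import math

def iterations_to_eps(mu0, eps, chi, kappa, n):
    # Apply the per-iteration contraction of Theorem 3.8,
    # mu_{k+1} <= (1 - chi/((1+2*kappa)*sqrt(n))) * mu_k, until mu_k <= eps.
    rate = 1.0 - chi / ((1.0 + 2.0 * kappa) * math.sqrt(n))
    k, mu = 0, mu0
    while mu > eps:
        mu *= rate
        k += 1
    return k

# illustrative values; in the paper chi comes from the formula in Theorem 3.8
mu0, eps, chi, kappa, n = 1.0, 1e-8, 0.05, 0.25, 100
k = iterations_to_eps(mu0, eps, chi, kappa, n)
# Corollary 3.9: k = O((1+kappa)*sqrt(n)*log(mu0/eps)); the explicit constant
# below uses (1 - 1/t)^(t*L) <= e^{-L}
bound = (1.0 + 2.0 * kappa) * math.sqrt(n) / chi * math.log(mu0 / eps) + 1.0
```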

4 A higher order corrector–predictor algorithm

4.1 Algorithmic framework

In order to devise an algorithm that is independent of the handicap of the problem, the idea of corrector–predictor methods was introduced in [17, 21]. In a corrector–predictor variant, only one neighborhood of the central path is used, whose radius can be any number between 0 and 1 and therefore does not depend on \(\kappa \). Potra [23] presented a second-order corrector–predictor method acting in the \(\mathcal {N}_{2,\tau }^-(\alpha )\) neighborhood for sufficient horizontal LCPs. In this section, we propose and analyze a higher order corrector–predictor algorithm in the neighborhood \(\mathcal {N}_{2,\tau }^-(\alpha )\) of the central path for \(P_*(\kappa )\)-LCPs.

4.1.1 The corrector step

The line search along the corrector direction has to be carried out in such a way that it maximizes the decrease of the proximity measure \(\delta ^-_{2,\tau }\) to the central path while also improving optimality. At a typical iteration of the algorithm we are given a point \((x,s)\in \mathcal {N}_{2,\tau }^-(\alpha )\) and compute the search directions \((u^i,v^i)\) by solving the linear systems in (3.2). Then we consider the point \(z(\theta )\) defined in (3.1). The step size \(\theta _c\) in the corrector step is chosen as

$$\begin{aligned}&\theta _c=\mathop {\mathrm {arg\,min}}_{0\le \theta \le 1}~\delta ^-_{2,\tau }(z(\theta ))\nonumber \\&\quad \mathrm {s.t.}~~~z(\theta )\in \mathcal {N}_{2,\tau }^-(\alpha ),\nonumber \\&\quad ~~~~~~~ 0<\mu (\theta )\le \left( 1-\frac{7}{16}\theta \right) \mu . \end{aligned}$$
(4.1)

According to Lemma 3.7, the corresponding “corrected point” satisfies

$$\begin{aligned} \bar{z}=(\bar{x},\bar{s}):=(x(\theta _c),s(\theta _c))\in \mathcal {N}_{2,\tau }^-(\bar{\alpha }), ~~{ with}~~~ \bar{\alpha }=\delta ^-_{2,\tau }(z(\theta _c))<\alpha . \end{aligned}$$
(4.2)

While the parameter \(\alpha \) is fixed during the algorithm, the positive quantity \(\bar{\alpha }\) varies from iteration to iteration. However, we will prove that there is a constant \(\alpha ^*<\alpha \), such that \(\bar{\alpha }<\alpha ^*\) in all iterations. We take \(m=m_c\) in the corrector step and \(m=m_p\) in the predictor step.
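A minimal numerical sketch of how the corrector step size in (4.1) could be selected in practice. The grid search, the callable `z_of_theta` (which must return \((x(\theta ), s(\theta ))\) from (3.1)), and all tolerances are implementation assumptions, not part of the paper's analysis:

```python
import numpy as np

def proximity(x, s, tau):
    # delta^-_{2,tau}(z) = ||(xs - tau*mu*e)^-|| / (tau*mu), with mu = x^T s / n
    mu = x @ s / x.size
    delta = np.linalg.norm(np.minimum(x * s - tau * mu, 0.0)) / (tau * mu)
    return delta, mu

def corrector_step(z_of_theta, tau, alpha, grid=200):
    # Grid search for (4.1): minimize the proximity measure subject to
    # staying in N^-_{2,tau}(alpha) and decreasing mu by (1 - 7*theta/16).
    _, mu0 = proximity(*z_of_theta(0.0), tau)
    best_theta, best_delta = 0.0, np.inf
    for theta in np.linspace(0.0, 1.0, grid + 1):
        x, s = z_of_theta(theta)
        if np.any(x <= 0) or np.any(s <= 0):
            continue  # iterate must stay strictly positive
        delta, mu = proximity(x, s, tau)
        if delta <= alpha and 0.0 < mu <= (1.0 - 7.0 * theta / 16.0) * mu0 and delta < best_delta:
            best_theta, best_delta = theta, delta
    return best_theta, best_delta
```

In an actual implementation the exact minimizer of (4.1) would be approximated more carefully; the grid merely illustrates the constraint structure.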

4.1.2 The predictor step

In a predictor step, we take the point \((\bar{x},\bar{s})\) obtained in the corrector step as a starting point, and compute the directions \(\bar{w}^i=(\bar{u}^i, \bar{v}^i)\) by solving the linear systems

$$\begin{aligned}&\Bigg \{ \begin{array}{cccc} M\bar{u}^1-\bar{v}^1=0,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ \bar{s}\bar{u}^1+\bar{x}\bar{v}^1=-(1+\sigma )\bar{x}\bar{s},~~~~~~~~~~~~~~~~~~~~~~~~~~~ \end{array}\nonumber \\&\Bigg \{ \begin{array}{cccc} M\bar{u}^2-\bar{v}^2=0,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ \bar{s}\bar{u}^2+\bar{x}\bar{v}^2=\sigma \bar{x}\bar{s}-\bar{u}^1\bar{v}^1,~~~~~~~~~~~~~~~~~~~~~~~~~~~ \end{array}\nonumber \\&\Bigg \{ \begin{array}{cccc} M\bar{u}^i-\bar{v}^i=0,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ \bar{s}\bar{u}^i+\bar{x}\bar{v}^i=-\sum \nolimits _{j=1}^{i-1}\bar{u}^j\bar{v} ^{i-j},~~~~~i=3, \ldots , m_p. \end{array} \end{aligned}$$
(4.3)
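All systems in (4.3) share the same coefficient matrix (only the right-hand sides change), so an implementation factors it once and performs \(m_p\) backsolves. A hedged numpy sketch, using the reduced system \((\bar{S}+\bar{X}M)\bar{u}^i=r^i\), \(\bar{v}^i=M\bar{u}^i\) obtained by eliminating \(\bar{v}^i\); the function name and the use of repeated `solve` calls (instead of an explicit reusable factorization) are our simplifications:

```python
import numpy as np

def predictor_directions(M, x, s, sigma, m_p):
    # Reduced coefficient matrix; in a real implementation this would be
    # factored once (e.g. LU) and reused for all m_p backsolves.
    A = np.diag(s) + np.diag(x) @ M
    u, v = [], []
    for i in range(1, m_p + 1):
        if i == 1:
            r = -(1 + sigma) * x * s
        elif i == 2:
            r = sigma * x * s - u[0] * v[0]
        else:
            # r^i = -sum_{j=1}^{i-1} u^j * v^{i-j}  (0-based lists below)
            r = -sum(u[j] * v[i - 2 - j] for j in range(i - 1))
        ui = np.linalg.solve(A, r)
        u.append(ui)
        v.append(M @ ui)  # enforces M u^i - v^i = 0 exactly
    return u, v
```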

For \(\xi \in (0, 1)\) we define

$$\begin{aligned} \bar{z}(\xi )=(\bar{x}(\xi ), \bar{s}(\xi )):=\bar{z}+\sum _{i=1}^{m_p}\bar{w}^i\xi ^i. \end{aligned}$$
(4.4)

The aim of the predictor step is to decrease the complementarity gap as much as possible while keeping the iterates in \(\mathcal {N}_{2,\tau }^-(\alpha )\). This is accomplished by defining the predictor step length as

$$\begin{aligned}&\xi _p=\mathop {\mathrm {arg\,min}}_{0\le \xi \le 1}~\bar{\mu }(\xi )\nonumber \\&\quad \mathrm {s.t.}~~~\bar{z}(\xi )\in \mathcal {N}_{2,\tau }^-(\alpha ). \end{aligned}$$
(4.5)

As a result of the predictor step, we obtain a point

$$\begin{aligned} z^+=(x^+, s^+):=(\bar{x}(\xi _p), \bar{s}(\xi _p)). \end{aligned}$$
(4.6)

By construction we have \((x^+, s^+)\in \mathcal {N}_{2,\tau }^-(\alpha )\), so that a new corrector step can be applied. Summing up, we can formulate the following iterative procedure (Fig. 2).

Fig. 2: Algorithm 2

4.2 Polynomial complexity

The following lemma gives an upper bound for the quantity \(\bar{\alpha }\) defined in (4.2).

Lemma 4.1

Suppose that \((x, s)\in \mathcal {N}_{2,\tau }^-(\alpha )\) with \(0<\alpha <1\) and \(0<\tau \le \frac{1}{4}\). Then the point \(\bar{z}=z(\theta _c)\) obtained by the corrector step belongs to the neighborhood \(\mathcal {N}_{2,\tau }^-(\bar{\alpha })\) with \(\bar{\alpha }\le (1-\tilde{\rho })\alpha \), where \(\tilde{\rho }=\frac{\sqrt{n}\theta _4}{8}\) and \(\theta _4\) is defined in (3.12).

Proof

Let \(0<\rho \le \frac{\sqrt{n}\theta }{8}, n\ge 3\). Using exactly the same arguments as in the proof of (3.16), we obtain, for any \(0\le \theta \le \theta _4\),

$$\begin{aligned}&\big \Vert \big (x(\theta )s(\theta )-\tau \mu (\theta )e\big )^-\big \Vert -(1-\rho )\alpha \tau \mu (\theta )\le (1-\theta \sqrt{n})\alpha \tau \mu \\&\quad +\sum _{i=m+1}^{2m}\theta ^i\big \Vert h^i\big \Vert -(1-\rho )\alpha \tau \mu \Big (1-\theta (1-\tau )-\frac{1}{\mu \sqrt{n}}\sum _{i=m+1}^{2m}\theta ^i\big \Vert h^i\big \Vert \Big )~~~~~~~~~~~~~\\&\quad \le \alpha \tau \mu (-\theta \sqrt{n}+\rho +\theta (1-\tau )) +\Big (1+\frac{\alpha \tau }{\sqrt{n}}\Big )\sum _{i=m+1}^{2m}\theta ^i\big \Vert h^i\big \Vert ~~~~~~~~~~~~~\\&\quad \le \theta \sqrt{n}\alpha \tau \mu \Big (-1+\frac{1}{8}+\frac{1-\tau }{\sqrt{n}}\Big ) +\Big (1+\frac{\tau \alpha }{\sqrt{n}}\Big )\frac{\theta \sqrt{n}\alpha \tau \mu }{4}~~~~~~~~~~~~~~~~~\\&\quad \le \theta \sqrt{n}\alpha \tau \mu \Big (-1+\frac{1}{8}+\frac{1}{\sqrt{n}} +\frac{1.15}{4}\Big )<0. \end{aligned}$$

From the above inequality, by \(\theta =\theta _4\) and \(\rho =\tilde{\rho }=\frac{\sqrt{n}\theta _4}{8}\), it follows that

$$\begin{aligned} \delta ^-_{2,\tau }(\bar{z})\le \delta ^-_{2,\tau }(z(\theta _4))\le (1-\tilde{\rho })\alpha , \end{aligned}$$

or equivalently \(\bar{z}\in \mathcal {N}_{2,\tau }^-((1-\tilde{\rho })\alpha )\). This completes the proof. \(\square \)

From (4.3) and (4.4) we obtain

$$\begin{aligned} \bar{x}(\xi )\bar{s}(\xi )= & {} (1-\xi )^{(1+\sigma )}\bar{x}\bar{s}+\sum _{i=m+1}^{2m}\xi ^i\bar{h}^i, \end{aligned}$$
(4.7)
$$\begin{aligned} \bar{\mu }(\xi )= & {} (1-\xi )^{(1+\sigma )}\bar{\mu }+\sum _{i=m+1}^{2m}\xi ^i\frac{e^T{\bar{h}}^i}{n}, \end{aligned}$$
(4.8)

where \(\bar{h}^i=\displaystyle \sum _{j=i-m}^m\bar{u}^j\bar{v}^{i-j}.\)

The following lemma gives an upper bound for \(\Vert \bar{h}^i\Vert \).

Lemma 4.2

The vectors \(\bar{h}^i\) produced by the predictor step at each iteration of the algorithm satisfy

$$\begin{aligned} \Vert \bar{h}^i\Vert \le \frac{2(1-\alpha )\tau \bar{\mu }}{(1+2\kappa )i}\Bigg (\frac{4(1+2\kappa )\sqrt{n}}{\sqrt{(1-\alpha )\tau }}\Bigg )^i, ~n\ge 3,~i=m+1, \ldots , 2m. \end{aligned}$$

Proof

Using the same argument as in the proof of Proposition 3 in [21], together with Lemma 3.3 (ii), we obtain

$$\begin{aligned} \Vert \bar{h}^i\Vert \le \frac{1+2\kappa }{2}\sum _{j=1}^{i-1}\bar{\eta }_j\bar{\eta }_{i-j}, \end{aligned}$$
(4.9)

where \(\bar{\eta }_i=\Vert \bar{D}\bar{u}^i+\bar{D}^{-1}\bar{v}^i\Vert \) with \(\bar{D}=\bar{X}^{-\frac{1}{2}}\bar{S}^{\frac{1}{2}}\). In the sequel, we derive upper bounds for \(\bar{\eta }_i, i=1, \ldots , m\). For \(i=1\), from the second equation of the first system of (4.3), we get

$$\begin{aligned} \bar{\eta }_1=\big \Vert (\bar{x}\bar{s})^{\frac{1}{2}}(1+\sigma )\big \Vert =(1+\sigma )\sqrt{n\bar{\mu }}. \end{aligned}$$

For \(i=2\), from the second system of (4.3), due to the fact that \((\bar{x}, \bar{s})\in \mathcal {N}^-_{2,\tau }(\bar{\alpha })\subseteq \mathcal {N}^-_{2,\tau }(\alpha )\) and using the triangle inequality and Lemma 3.3(iii), we obtain

$$\begin{aligned} \bar{\eta }_2= & {} \big \Vert (\bar{x}\bar{s})^{-\frac{1}{2}}(\sigma \bar{x}\bar{s}-\bar{u}^1\bar{v}^1)\big \Vert \le \sigma \sqrt{n\bar{\mu }}+\frac{\Vert \bar{u}^1\bar{v}^1\Vert }{\sqrt{(1-\alpha )\tau \bar{\mu }}}\\\le & {} \frac{n\sqrt{\bar{\mu }}}{\sqrt{(1-\alpha )\tau }}\Bigg (\frac{\sigma \sqrt{(1-\alpha )\tau }}{\sqrt{3}} +\Big (\frac{1}{\sqrt{8}}+\kappa \Big )(1+\sigma )^2\Bigg )\\\le & {} \frac{n\sqrt{\bar{\mu }}}{\sqrt{(1-\alpha )\tau }}\Bigg (\frac{(1+\sigma )^2}{2}+\kappa (1+\sigma )^2\Bigg ) =\frac{(1+\sigma )^2(1+2\kappa )n\sqrt{\bar{\mu }}}{2\sqrt{(1-\alpha )\tau }}. \end{aligned}$$

For \(3\le i\le m\), from the third system of (4.3), we have

$$\begin{aligned} \bar{\eta }_i=\Big \Vert -(\bar{x}\bar{s})^{-\frac{1}{2}}\sum _{j=1}^{i-1}{\bar{u}}^j{\bar{v}}^{i-j}\Big \Vert\le & {} \frac{1}{\sqrt{(1-\alpha )\tau \bar{\mu }}}\sum _{j=1}^{i-1}\Vert {\bar{D}}{\bar{u}}^j\Vert \Vert {\bar{D}}^{-1}{\bar{v}}^{i-j}\Vert \\\le & {} \frac{1+2\kappa }{2\sqrt{(1-\alpha )\tau \bar{\mu }}}\sum _{j=1}^{i-1}\bar{\eta }_j\bar{\eta }_{i-j}, \end{aligned}$$

where the last inequality follows by the same argument as in the proof of Lemma 3.4. One easily verifies, by induction, that

$$\begin{aligned} \bar{\eta }_i\le \frac{2\alpha _i\sqrt{(1-\alpha )\tau \bar{\mu }}}{1+2\kappa } \Bigg (\frac{(1+\sigma )(1+2\kappa )\sqrt{n}}{2\sqrt{(1-\alpha )\tau }}\Bigg )^i, \end{aligned}$$

where \(\alpha _i\)’s are defined as in Lemma 3.4. Substitution of this bound into (4.9) yields

$$\begin{aligned} \Vert \bar{h}^i\Vert\le & {} \frac{2(1-\alpha )\tau \bar{\mu }}{1+2\kappa } \Bigg (\frac{(1+\sigma )(1+2\kappa )\sqrt{n}}{2\sqrt{(1-\alpha )\tau }}\Bigg )^i \sum _{j=1}^{i-1}\alpha _j\alpha _{i-j}\\\le & {} \frac{2(1-\alpha )\tau \bar{\mu }}{(1+2\kappa )i} \Bigg (\frac{2(1+\sigma )(1+2\kappa )\sqrt{n}}{\sqrt{(1-\alpha )\tau }}\Bigg )^i.~~~~~~~~~~~ \end{aligned}$$

where the last inequality follows from the recursion defining the \(\alpha _i\)'s and the bound \(\alpha _i\le \frac{4^i}{i}\). This completes the proof. \(\square \)

Corollary 4.3

If the LCP is \(P_{*}(\kappa )\) and \((\bar{x}, \bar{s})\in \mathcal {N}_{2,\tau }^-(\bar{\alpha })\), then the following relations hold for any \(\delta >0\) and \(\kappa \ge 0\):

$$\begin{aligned}&\mathrm{(i)}~\frac{\delta }{\bar{\mu }}\sum _{i=m+1}^{2m}\xi ^i\Vert \bar{h}^i\Vert<1,~~~~0\le \xi \le \frac{\sqrt{(1-\alpha )\tau }\min \Big \{1,\Big (\frac{1.4\delta (1-\alpha )\tau }{1+2\kappa }\Big )^\frac{-1}{m+1}\Big \}}{4(1+2\kappa )\sqrt{n}}.~~~~~~~~~~\\&\mathrm{(ii)}~\frac{\delta }{\bar{\mu }\sqrt{n}}\sum _{i=m+1}^{2m}\xi ^i\Vert \bar{h}^i\Vert <\xi , ~0\le \xi \le \frac{\sqrt{(1-\alpha )\tau }\min \Big \{1,\Big (5.6\delta \sqrt{(1-\alpha )\tau }\Big ) ^{\frac{-1}{m}}\Big \}}{4(1+2\kappa )\sqrt{n}}. \end{aligned}$$

Proof

The proof of this corollary is similar to the proof of Corollary 3.6 and is therefore omitted. \(\square \)

In the rest of this section, we obtain a lower bound for the length of the predictor step \(\xi _p\) defined in (4.5). For this purpose, we first need to keep the iterates in the neighborhood, so we define

$$\begin{aligned} \bar{\xi }=\max \big \{\hat{\xi }\in [0, 1]: \bar{z}(\xi )\in \mathcal {N}_{2,\tau }^-(\alpha ),~~\forall \xi \in [0,\hat{\xi }]\big \}. \end{aligned}$$
(4.10)
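Since membership in \(\mathcal {N}_{2,\tau }^-(\alpha )\) holds on an interval \([0,\bar{\xi }]\), the maximal step (4.10) can be approximated by bisection. A minimal sketch; the callable `zbar_of_xi` (returning \((\bar{x}(\xi ), \bar{s}(\xi ))\) from (4.4)), the tolerance, and the assumed threshold structure are our implementation assumptions:

```python
import numpy as np

def in_neighborhood(x, s, tau, alpha):
    # Membership test for N^-_{2,tau}(alpha): x,s > 0 and
    # ||(xs - tau*mu*e)^-|| <= alpha*tau*mu.
    if np.any(x <= 0) or np.any(s <= 0):
        return False
    mu = x @ s / x.size
    return np.linalg.norm(np.minimum(x * s - tau * mu, 0.0)) <= alpha * tau * mu

def max_step(zbar_of_xi, tau, alpha, tol=1e-10):
    # Bisection for the largest xi in [0,1] with zbar(xi) in the neighborhood,
    # assuming membership holds on an initial interval [0, xi_bar].
    if in_neighborhood(*zbar_of_xi(1.0), tau, alpha):
        return 1.0
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if in_neighborhood(*zbar_of_xi(mid), tau, alpha):
            lo = mid
        else:
            hi = mid
    return lo
```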

Lemma 4.4

Suppose that the corrector point \((\bar{x}, \bar{s})\in \mathcal {N}_{2,\tau }^-(\bar{\alpha })\). Then the maximum step size \(\bar{\xi }\) given by (4.10) satisfies

$$\begin{aligned} \bar{\xi }\ge \xi _2:= \frac{\sqrt{(1-\alpha )\tau }}{4(1+2\kappa )\sqrt{n}}\min \Bigg \{\Big (\frac{5.6}{3(1+2\kappa )}\Big )^\frac{-1}{m_p+1}, \Big (\frac{2.8(1-\alpha )}{\tilde{\rho }\alpha (1+2\kappa )}\Big )^\frac{-1}{m_p+1}\Bigg \}. \end{aligned}$$

Proof

From (4.7) and the fact that \((\bar{x}, \bar{s})\in \mathcal {N}_{2,\tau }^-(\bar{\alpha })\) it follows that

$$\begin{aligned} \bar{x}(\xi )\bar{s}(\xi )\ge & {} (1-\xi )^{(1+\sigma )}\bar{x}\bar{s}-\sum _{i=m+1}^{2m}\xi ^i\Vert \bar{h}^i\Vert e\nonumber \\\ge & {} (1-\xi )^{(1+\sigma )}(1-\bar{\alpha })\tau \bar{\mu }e-\sum _{i=m+1}^{2m}\xi ^i\Vert \bar{h}^i\Vert e. \end{aligned}$$
(4.11)

By using Corollary 4.3(i) with \(\delta =\frac{4}{3(1-\alpha )\tau }\), we have

$$\begin{aligned} \sum _{i=m+1}^{2m}\xi ^i\Vert \bar{h}^i\Vert \le \frac{\bar{\mu }}{\delta }=\frac{3(1-\alpha )\tau \bar{\mu }}{4}, ~~~ 0<\xi \le \xi _1:=\frac{\sqrt{(1-\alpha )\tau }\Big (\frac{5.6}{3(1+2\kappa )}\Big )^\frac{-1}{m_p+1}}{4(1+2\kappa )\sqrt{n}}. \end{aligned}$$

Since \(0<\alpha <1\), \(0<\tau \le \frac{1}{4}\) and \(n\ge 3\), we have \(\xi _1<\frac{1}{8\sqrt{3}}\). Due to the obvious fact that

$$\begin{aligned} 1-2\xi \le (1-\xi )^{1+\sigma }\le 1-\xi ,~~~\forall \xi \in (0, 1],~~\sigma \in \{0, 1\}, \end{aligned}$$

we obtain

$$\begin{aligned} (1-\xi )^{1+\sigma }>1-\frac{2}{8\sqrt{3}}>\frac{3}{4},~~ \forall \xi \in (0, \xi _1], ~~\sigma \in \{0, 1\}. \end{aligned}$$

Substitution of these two bounds into (4.11) yields

$$\begin{aligned} \bar{x}(\xi )\bar{s}(\xi )>\frac{3}{4}(1-\alpha )\tau \bar{\mu }e-\frac{3}{4}(1-\alpha )\tau \bar{\mu }e=0. \end{aligned}$$

By using a continuity argument similar to the one in the proof of Lemma 3.7 we deduce that \(\bar{x}(\xi )>0\) and \(\bar{s}(\xi )>0\), for any \(\xi \in (0, \xi _1]\). Since \(\bar{s}(\xi )=M\bar{x}(\xi )+q\) it follows that \({\bar{z}}(\xi )\in \mathcal {F}^0\) for any \(\xi \in (0, \xi _1]\). According to (4.7) and (4.8) we have

$$\begin{aligned} \big \Vert \big (\bar{x}(\xi )\bar{s}(\xi )-\tau \bar{\mu }(\xi )e\big )^-\big \Vert\le & {} (1-\xi )^{(1+\sigma )}\big \Vert (\bar{x}\bar{s}-\tau \bar{\mu } e)^-\big \Vert \\&+\Big \Vert \sum _{i=m+1}^{2m} \xi ^i\Big (\bar{h}^i-\frac{\tau e^T\bar{h}^i}{n}e\Big )^-\Big \Vert \\\le & {} (1-\xi )^{(1+\sigma )}\tau \bar{\alpha }\bar{\mu }+\sum _{i=m+1}^{2m}\xi ^i\big \Vert \bar{h}^i\big \Vert ,\\ \bar{\mu }(\xi )\ge & {} (1-\xi )^{(1+\sigma )}\bar{\mu }-\sum _{i=m+1}^{2m}\frac{\xi ^i\big \Vert \bar{h}^i\big \Vert }{\sqrt{n}}. \end{aligned}$$

Furthermore, from Lemma 4.1 and Corollary 4.3(i) with \(\delta =\frac{2}{{\tilde{\rho }}\alpha \tau }\) we deduce that

$$\begin{aligned}&\big \Vert \big (\bar{x}(\xi )\bar{s}(\xi )-\tau \bar{\mu }(\xi )e\big )^-\big \Vert -\alpha \tau \bar{\mu }(\xi )\nonumber \\&\quad \le (1-\xi )^{(1+\sigma )}(\bar{\alpha }-\alpha )\tau \bar{\mu } +\Big (1+\frac{\alpha \tau }{\sqrt{n}}\Big )\sum _{i=m+1}^{2m}\xi ^i\big \Vert \bar{h}^i\big \Vert \nonumber \\&\quad<-(1-\xi )^{(1+\sigma )}{\tilde{\rho }}\alpha \tau \bar{\mu }+1.15\sum _{i=m+1}^{2m}\xi ^i\big \Vert \bar{h}^i\big \Vert \nonumber \\&\quad<-\frac{3}{4}{\tilde{\rho }}\alpha \tau \bar{\mu }+\frac{1.15}{2}{\tilde{\rho }}\alpha \tau \bar{\mu }<0. \end{aligned}$$
(4.12)

Therefore

$$\begin{aligned}&\bar{z}(\xi )\in \mathcal {N}_{2,\tau }^-(\alpha ), 0\le \xi \le \frac{\sqrt{(1-\alpha )\tau }}{4(1+2\kappa )\sqrt{n}}\min \Bigg \{\Big (\frac{5.6}{3(1+2\kappa )}\Big )^\frac{-1}{m_p+1}, \Big (\frac{2.8(1-\alpha )}{\tilde{\rho }\alpha (1+2\kappa )}\Big )^\frac{-1}{m_p+1}\Bigg \}. \end{aligned}$$

This completes the proof. \(\square \)

Theorem 4.5

If the LCP is \(P_*(\kappa )\), then Algorithm 2 is well defined and produces a sequence of points \(z^k=(x^k, s^k)\) belonging to the neighborhood \({\mathcal {N}}_{2,\tau }^-(\alpha )\). Moreover,

$$\begin{aligned} \mu _{k+1}\le \Bigg (1-\frac{\bar{\chi }(\alpha ,\tau , m_c, m_p)}{(1+2\kappa )\sqrt{n}}\Bigg ) \bar{\mu }_{k},~~~~k=0, 1, \ldots \end{aligned}$$

where

$$\begin{aligned}&\bar{\chi }(\alpha ,\tau , m_c, m_p)\\&\quad =\frac{\sqrt{(1-\alpha )\tau } \min \Bigg \{\Big (11.2\sqrt{(1-\alpha )\tau }\Big )^{\frac{-1}{m_p}},\Big (\frac{5.6}{3(1+2\kappa )}\Big )^\frac{-1}{m_p+1}, \Big (\frac{2.8(1-\alpha )}{\tilde{\rho }\alpha (1+2\kappa )}\Big )^\frac{-1}{m_p+1}\Bigg \} }{8}. \end{aligned}$$

Proof

Using (4.8) and Corollary 4.3(ii) with \(\delta =2\) we deduce that the inequality

$$\begin{aligned} \bar{\mu }(\xi )\le (1-\xi )^{(1+\sigma )}\bar{\mu } +\sum _{i=m+1}^{2m}\frac{\xi ^i\big \Vert \bar{h}^i\big \Vert }{\sqrt{n}} \le (1-\xi )\bar{\mu }+\frac{\xi \bar{\mu }}{2}=\left( 1-\frac{\xi }{2}\right) \bar{\mu }, \end{aligned}$$

holds for any \(0\le \xi \le \xi _3:=\frac{\sqrt{(1-\alpha )\tau }\big (11.2\sqrt{(1-\alpha )\tau }\big ) ^{\frac{-1}{m_p}}}{4(1+2\kappa )\sqrt{n}}.\) By using the above inequality and Lemma 4.4 we conclude that

$$\begin{aligned} (\bar{x}(\xi ), \bar{s}(\xi ))\in {\mathcal {N}}_{2,\tau }^-(\alpha )~~\mathrm{and}~~\bar{\mu }(\xi )\le \left( 1-\frac{\xi }{2}\right) \bar{\mu }, \end{aligned}$$

hold for any \(0\le \xi \le \tilde{\xi }:=\min \{\xi _2, \xi _3\}\), where \(\xi _2\) is defined as in Lemma 4.4. Finally, from the definition of \(\xi _p\) in (4.5) it follows that

$$\begin{aligned} \bar{\mu }(\xi _p)\le \bar{\mu }(\tilde{\xi })\le \Bigg (1-\frac{\bar{\chi }(\alpha ,\tau ,m_c,m_p)}{(1+2\kappa )\sqrt{n}}\Bigg )\bar{\mu }. \end{aligned}$$

This completes the proof. \(\square \)

Corollary 4.6

Under the hypothesis of Theorem 4.5, Algorithm 2 produces a point \(z\in \mathcal {N}_{2,\tau }^-(\alpha )\) with \(\mu (z)\le \epsilon \) in at most \(O((1+\kappa )\sqrt{n}L)\) iterations, where \(L=\log (\frac{{\mu }(z^0)}{\epsilon })\).

4.3 Superlinear convergence

To investigate the superlinear convergence of the sequence \(\{\mu _k\}\) produced by Algorithm 2, we need the following lemma.

Lemma 4.7

(cf. Lemma 5.1 in [17]) The solution of (4.3) satisfies

$$\begin{aligned} \bar{u}^i=O(\bar{\mu }^i),~~~~\bar{v}^i=O(\bar{\mu }^i),~~~i=1,\ldots ,m_p~~{ if}~P_*(\kappa )~(2.1)~{ is}~{ nondegenerate} \end{aligned}$$

and

$$\begin{aligned} \bar{u}^i=O(\bar{\mu }^{i/2}),~~~~\bar{v}^i=O(\bar{\mu }^{i/2}),~~~i=1,\ldots ,m_p~~{ if}~P_*(\kappa )~(2.1)~{ is}~{ degenerate.} \end{aligned}$$

Theorem 4.8

If LCP is \(P_*(\kappa )\), then the sequence \(\{\mu _k\}\) produced by Algorithm 2 is superlinearly convergent with Q-order \(\frac{m_p+1}{1+\sigma }\), i.e.,

$$\begin{aligned} \mu _{k+1}=O\left( \mu _k^{m_p+1}\right) ,~~{ if}~P_*(\kappa )~(2.1)~{ is}~{ nondegenerate} \end{aligned}$$

and

$$\begin{aligned} \mu _{k+1}=O\left( \mu _k^{(m_p+1)/2}\right) ,~~{ if}~P_*(\kappa )~(2.1)~{ is}~{ degenerate.} \end{aligned}$$

Proof

For simplicity we denote \(m=m_p\). From Lemma 4.7 it follows that there is a constant c independent of \(\kappa \) such that

$$\begin{aligned} \big \Vert \bar{u}^i\big \Vert \le c\bar{\mu }^{\frac{i}{1+\sigma }},~~~\big \Vert \bar{v}^i\big \Vert \le c\bar{\mu }^{\frac{i}{1+\sigma }},~~~i=1,\ldots ,m. \end{aligned}$$

Since \(\lim _{k\rightarrow \infty }\bar{\mu }=0\), we may assume that \(\bar{\mu }\) is small enough; let \(\varphi \) be a constant such that

$$\begin{aligned} \bar{\mu }<\frac{1}{\varphi }<1,~~~0<\breve{\xi }:=1-\Big (\varphi \bar{\mu }^{\frac{m-\sigma }{1+\sigma }}\Big )^{\frac{1}{1+\sigma }}, ~~~\varphi \ge \frac{3\bar{c}}{2\alpha \tau \tilde{\rho }}, \end{aligned}$$

where \(\tilde{\rho }\) is defined in Lemma 4.1. Therefore, we may conclude that for any \(\xi \in (0, 1]\)

$$\begin{aligned} \big \Vert \bar{h}^i\big \Vert\le & {} \sum _{j=i-m}^m\big \Vert \bar{u}^j\big \Vert \big \Vert \bar{v}~^{i-j}\big \Vert \le (2m-i+1)c^2\bar{\mu }^{\frac{i}{(1+\sigma )}},\\ \sum _{i=m+1}^{2m}\xi ^i\big \Vert \bar{h}^i\big \Vert\le & {} \sum _{i=m+1}^{2m}\big \Vert \bar{h}^i\big \Vert \le c^2\bar{\mu }^{\frac{m+1}{(1+\sigma )}}\sum _{i=m+1}^{2m}(2m-i+1)\\= & {} \frac{m(m+1)c^2}{2}\bar{\mu }^{\frac{m+1}{(1+\sigma )}} =\bar{c}\bar{\mu }^{\frac{m+1}{(1+\sigma )}}. \end{aligned}$$

On the other hand, according to (4.11) and (4.12) we deduce that for any \(\xi \in (0,\breve{\xi }]\)

$$\begin{aligned} \bar{x}(\xi )\bar{s}(\xi )\ge & {} (1-\bar{\alpha })\tau (1-\xi )^{1+\sigma }\bar{\mu }e-\bar{c}\bar{\mu }^{\frac{m+1}{(1+\sigma )}}e\\\ge & {} (1-\alpha )\tau (1-\breve{\xi })^{1+\sigma }\bar{\mu }e-\bar{c}\bar{\mu }^{\frac{m+1}{(1+\sigma )}}e\\= & {} ((1-\alpha )\tau \varphi -\bar{c})\bar{\mu }^{\frac{m+1}{(1+\sigma )}}>0,\\ \big \Vert \big (\bar{x}(\xi )\bar{s}(\xi )-\tau \bar{\mu }(\xi )e\big )^-\big \Vert -\alpha \tau \bar{\mu }(\xi )\le & {} -\alpha \tau \tilde{\rho }(1-\xi )^{1+\sigma }\bar{\mu }+1.15~\bar{c}\bar{\mu }^{\frac{m+1}{(1+\sigma )}}~\\\le & {} (-\alpha \tau \tilde{\rho }\varphi +1.15~\bar{c})\bar{\mu }^{\frac{m+1}{(1+\sigma )}}<0.~~~~~~ \end{aligned}$$

From two above inequalities we deduce that \(\bar{z}(\xi )\in \mathcal {N}_{2,\tau }^-(\alpha )\) for each \(\xi \in (0, \breve{\xi }]\). From (4.8) we obtain

$$\begin{aligned} \bar{\mu }(\breve{\xi }) \le (1-\breve{\xi })^{1+\sigma }\bar{\mu }+\frac{1}{\sqrt{n}}\sum _{i=m+1}^{2m}\breve{\xi }^i\big \Vert \bar{h}^i\big \Vert \le \Big (\varphi +\frac{\bar{c}}{\sqrt{n}}\Big )\bar{\mu }^{\frac{m+1}{(1+\sigma )}} \le \Big (\varphi +\frac{\bar{c}}{\sqrt{3}}\Big )\bar{\mu }^{\frac{m+1}{(1+\sigma )}}. \end{aligned}$$

Finally, by using (4.5),

$$\begin{aligned} \mu _{k+1}=\bar{\mu }_k(\xi _p)\le \bar{\mu }_k(\breve{\xi }) \le \Big (\varphi +\frac{\bar{c}}{\sqrt{3}}\Big )\bar{\mu }_k^{\frac{m+1}{(1+\sigma )}} \le \Big (\varphi +\frac{\bar{c}}{\sqrt{3}}\Big )\mu _k^{\frac{m+1}{(1+\sigma )}}. \end{aligned}$$

This completes the proof. \(\square \)
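The practical meaning of a Q-order greater than one is that the number of correct digits in the complementarity gap roughly multiplies at each step. A small illustrative sketch of such a recursion; the constants \(c\), the order \(p=(m_p+1)/(1+\sigma )\), and the starting value are hypothetical sample values, not quantities computed in the proof:

```python
def superlinear_seq(mu0, c, p, k):
    # Model mu_{j+1} = c * mu_j**p, i.e. Q-order-p convergence as in
    # Theorem 4.8 (p = m_p+1 nondegenerate, (m_p+1)/2 degenerate).
    mus = [mu0]
    for _ in range(k):
        mus.append(c * mus[-1] ** p)
    return mus

# e.g. order p = 3 (m_p = 2, nondegenerate case), illustrative c and mu0
mus = superlinear_seq(0.1, 2.0, 3, 3)
```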

5 Conclusions

We have presented two interior-point methods for solving \(P_*(\kappa )\) linear complementarity problems acting in the wide neighborhood of the central path introduced by Ai and Zhang. The interior point methods from [1, 18] use the neighborhood \(\mathcal {N}_{2,\tau }^-(\alpha )\) only for \(\alpha \in (0,1/2]\), while in this paper we construct algorithms based on this neighborhood for any value of \(\alpha \in (0,1)\). Algorithm 1 requires only one matrix factorization and \(m_c\) backsolves, plus the solution of the optimization problem (3.5). Algorithm 2 is a corrector–predictor variant. The corrector step of order \(m_c\) decreases both the centrality measure and the complementarity gap; a predictor step of order \(m_p\) is then applied to decrease the complementarity gap further. The second algorithm uses two matrix factorizations and \(m_c+m_p\) backsolves, plus the solutions of the optimization problems (4.1) and (4.5). The Q-order of the convergence of the complementarity gap in Algorithm 2 is \(m_p+1\) for nondegenerate problems and \((m_p+1)/2\) for degenerate problems. The proposed algorithms do not use the handicap \(\kappa \) of the problem explicitly, so both algorithms can be implemented for solving any \(P_*(\kappa )\)-LCP. We derive that an \(\epsilon \)-approximate solution is obtained in at most \(O((1+\kappa )\sqrt{n}L)\) iterations for both algorithms. This bound matches the currently best known theoretical bound obtained by any interior-point method for solving \(P_*(\kappa )\)-LCPs.

For future research, it would be interesting to investigate the following two questions:

  1.

    Could we modify the high order IPMs presented here in such a way that they may be applied to LCPs without any restriction or knowledge about the properties of the coefficient matrices, for instance like those worked out in the papers of Illés et al. [9,10,11]?

  2.

    Are the suggested IPMs discussed here more efficient and reliable than others, for example those from the paper of Kheirfam [14]? Furthermore, can these algorithms be implemented for the P-matrices mentioned by De Klerk and Nagy [4]?