
Infeasible interior-point method for symmetric optimization using a positive-asymptotic barrier

Published in: Computational Optimization and Applications

Abstract

We propose a new primal-dual infeasible interior-point method for symmetric optimization using Euclidean Jordan algebras. Different kinds of interior-point methods can be obtained by using search directions based on kernel functions. Some search directions can also be determined by applying an algebraically equivalent transformation to the centering equation of the central path. Using this technique, we introduce a new search direction that cannot be derived from a usual kernel function. For this reason, we use the new notion of a positive-asymptotic kernel function, which induces the corresponding class of barriers. In general, the main iterations of infeasible interior-point methods consist of one feasibility step and several centering steps. We prove that in our algorithm it suffices to take only one centering step per main iteration in order to obtain a well-defined algorithm. Moreover, we show that the algorithm finds a solution in polynomial time and has the same complexity as the currently best known infeasible interior-point methods. Finally, we present some numerical results.

[Figures 1 and 2 appear in the published article.]

References

  1. Ahmadi, K., Hasani, F., Kheirfam, B.: A full-Newton step infeasible interior-point algorithm based on Darvay directions for linear optimization. J. Math. Model. Algorithms Oper. Res. 13(2), 191–208 (2014)

  2. Asadi, S., Mansouri, H.: A new full-Newton step \({O}(n)\) infeasible interior-point algorithm for \({P}_*(\kappa )\)-horizontal linear complementarity problems. Comput. Sci. J. Mold. 22(1), 37–61 (2014)

  3. Darvay, Zs.: New interior point algorithms in linear programming. Adv. Model. Optim. 5(1), 51–92 (2003)

  4. Darvay, Zs., Papp, I.M., Takács, P.R.: An infeasible full-Newton step algorithm for linear optimization with one centering step in major iteration. Studia Univ. Babeş-Bolyai Ser. Inform. 59(1), 28–45 (2014)

  5. Darvay, Zs., Papp, I.M., Takács, P.R.: Complexity analysis of a full-Newton step interior-point method for linear optimization. Period. Math. Hung. 73(1), 27–42 (2016)

  6. Darvay, Zs., Takács, P.R.: New interior-point algorithm for symmetric optimization based on a positive-asymptotic barrier function. Operations Research Report, 2016-01, Eötvös Loránd University of Sciences, Budapest (2016)

  7. de Klerk, E., Roos, C., Terlaky, T.: Initialization in semidefinite programming via a self-dual, skew-symmetric embedding. Oper. Res. Lett. 20(5), 213–221 (1997)

  8. de Klerk, E.: Aspects of Semidefinite Programming: Interior Point Algorithms and Selected Applications. Kluwer Academic Publishers, Dordrecht (2002)

  9. Faraut, J., Korányi, A.: Analysis on Symmetric Cones. Oxford University Press, New York (1994)

  10. Faybusovich, L.: Linear systems in Jordan algebras and primal-dual interior-point algorithms. J. Comput. Appl. Math. 86(1), 149–175 (1997)

  11. Faybusovich, L.: A Jordan-algebraic approach to potential-reduction algorithms. Math. Z. 239(1), 117–129 (2002)

  12. Gay, D.: Electronic mail distribution of linear programming test problems. Math. Program. Soc. COAL Newsl. 3, 10–12 (1985)

  13. Gu, G., Zangiabadi, M., Roos, C.: Full Nesterov–Todd step infeasible interior-point method for symmetric optimization. Eur. J. Oper. Res. 214(3), 473–484 (2011)

  14. Güler, O.: Barrier functions in interior point methods. Math. Oper. Res. 21(4), 860–885 (1996)

  15. Illés, T., Nagy, M., Terlaky, T.: EP theorem for dual linear complementarity problems. J. Optim. Theory Appl. 140(2), 233–238 (2009)

  16. Illés, T., Nagy, M., Terlaky, T.: A polynomial path-following interior point algorithm for general linear complementarity problems. J. Global Optim. 47(3), 329–342 (2010)

  17. Kheirfam, B.: A new infeasible interior-point method based on Darvay’s technique for symmetric optimization. Ann. Oper. Res. 211(1), 209–224 (2013)

  18. Kojima, M., Megiddo, N., Noma, T., Yoshise, A.: A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems. Lecture Notes in Computer Science, vol. 538. Springer, Berlin (1991)

  19. Koulaei, M.H., Terlaky, T.: On the complexity analysis of a Mehrotra-type primal-dual feasible algorithm for semidefinite optimization. Optim. Methods Softw. 25(3), 467–485 (2010)

  20. Li, Y., Terlaky, T.: A new class of large neighborhood path-following interior point algorithms for semidefinite optimization with \({O}\left(\sqrt{n}\log \frac{\mathrm{Tr}(x^0s^0)}{\epsilon }\right)\) iteration complexity. SIAM J. Optim. 20(6), 2853–2875 (2010)

  21. Lustig, I.: Feasibility issues in a primal-dual interior-point method for linear programming. Math. Program. 49(1–3), 145–162 (1990)

  22. Mehrotra, S.: On the implementation of a primal-dual interior point method. SIAM J. Optim. 2(4), 575–601 (1992)

  23. Monteiro, R., Zhang, Y.: A unified analysis for a class of long-step primal-dual path-following interior-point algorithms for semidefinite programming. Math. Program. 81(3), 281–299 (1998)

  24. Nesterov, Y.E., Nemirovskii, A.S.: Interior Point Polynomial Methods in Convex Programming, SIAM Studies in Applied Mathematics, vol. 13. SIAM Publications, Philadelphia (1994)

  25. Nesterov, Y.E., Todd, M.J.: Self-scaled barriers and interior-point methods for convex programming. Math. Oper. Res. 22(1), 1–42 (1997)

  26. Nesterov, Y.E., Todd, M.J.: Primal-dual interior-point methods for self-scaled cones. SIAM J. Optim. 8(2), 324–364 (1998)

  27. Pan, S., Li, X., He, S.: An infeasible primal-dual interior-point algorithm for linear programs based on logarithmic equivalent transformation. J. Math. Anal. Appl. 314(2), 644–660 (2006)

  28. Pataki, G., Schmieta, S.: The DIMACS library of mixed semidefinite-quadratic-linear programs. http://dimacs.rutgers.edu/Challenges/Seventh/Instances/ (1999). Accessed 5 June 2018

  29. Peng, J., Roos, C., Terlaky, T.: A new and efficient large-update interior-point method for linear optimization. J. Comput. Technol. 6(4), 61–80 (2001)

  30. Peng, J., Roos, C., Terlaky, T.: A new class of polynomial primal-dual methods for linear and semidefinite optimization. Eur. J. Oper. Res. 143, 234–256 (2002)

  31. Peng, J., Roos, C., Terlaky, T.: Primal-dual interior-point methods for second-order conic optimization based on self-regular proximities. SIAM J. Optim. 13(1), 179–203 (2002)

  32. Peng, J., Roos, C., Terlaky, T.: Self-Regular Functions: A New Paradigm for Primal-Dual Interior-Point Methods. Princeton University Press, Princeton (2002)

  33. Roos, C.: A full-Newton step \({O}(n)\) infeasible interior-point algorithm for linear optimization. SIAM J. Optim. 16(4), 1110–1136 (2006)

  34. Roos, C., Terlaky, T., Vial, J.P.: Theory and Algorithms for Linear Optimization. Springer, New York (2005)

  35. Salahi, M., Terlaky, T.: Self-regular interior-point methods for semidefinite optimization. In: Anjos, M.F., Lasserre, J.B. (eds.) Handbook on Semidefinite, Conic and Polynomial Optimization, pp. 437–454. Springer, Berlin (2012)

  36. Salahi, M., Terlaky, T., Zhang, G.: The complexity of self-regular proximity based infeasible IPMs. Comput. Optim. Appl. 33, 157–185 (2006)

  37. Schmieta, S., Alizadeh, F.: Extension of primal-dual interior point algorithms to symmetric cones. Math. Program. Ser. A 96(3), 409–438 (2003)

  38. Sonnevend, G.: An “analytic center” for polyhedrons and new classes of global algorithms for linear (smooth, convex) programming. In: Prékopa, A., Szelezsán, J., Strazicky, B. (eds.) System Modelling and Optimization: Proceedings of the 12th IFIP-Conference Held in Budapest, Hungary, September 1985, Lecture Notes in Control and Information Sciences, vol. 84, pp. 866–876. Springer, Berlin, West-Germany (1986)

  39. Sturm, J.F.: Similarity and other spectral relations for symmetric cones. Linear Algebra Appl. 312(13), 135–154 (2000)

  40. Tanabe, K.: Centered Newton method for linear programming: interior and ‘exterior’ point method. In: Tone, K. (ed.) New Methods for Linear Programming, vol. 3, pp. 98–100. The Institute of Statistical Mathematics, Tokyo, Japan (1990) (In Japanese)

  41. Toh, K.C., Todd, M.J., Tutuncu, R.H.: SDPT3—a Matlab software package for semidefinite programming. Optim. Methods Softw. 11, 545–581 (1999)

  42. Tutuncu, R.H., Toh, K.C., Todd, M.J.: Solving semidefinite-quadratic-linear programs using SDPT3. Math. Program. Ser. B 95, 189–217 (2003)

  43. Vieira, M.: Jordan algebraic approach to symmetric optimization. Ph.D. thesis, Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, The Netherlands (2007)

  44. Wang, G.Q., Bai, Y.Q.: A new full Nesterov–Todd step primal-dual path-following interior-point algorithm for symmetric optimization. J. Optim. Theory Appl. 154(3), 966–985 (2012)

  45. Wright, S.: Primal-Dual Interior-Point Methods. SIAM, Philadelphia (1997)

  46. Ye, Y.: Interior Point Algorithms, Theory and Analysis. Wiley, Chichester (1997)

Acknowledgements

The authors are grateful to the editor and the anonymous reviewers for the valuable suggestions that improved the presentation of the paper. This work was supported by a grant of the Ministry of Research and Innovation, CNCS-UEFISCDI, project number PN-III-P4-ID-PCE-2016-0190, within PNCDI III.

Corresponding author

Correspondence to Zsolt Darvay.

Appendix

We present the theory of Euclidean Jordan algebras and symmetric cones. A more detailed study of this approach can be found in the book of Faraut and Korányi [9]. Consider an n-dimensional vector space V over \(\mathbb {R}\) and a bilinear map \(\circ :(x,y) \rightarrow x\circ y\in V\). Then, \((V,\circ )\) is called a Jordan algebra iff for all \(x,y \in V\)

  i.

    \(x\circ y=y\circ x\);

  ii.

    \(x \circ (x^2 \circ y) = x^2 \circ (x \circ y)\), where \(x^2=x \circ x\).

Suppose that there exists an identity element e such that \(x \circ e = e \circ x = x\) for all \(x \in V\). A Jordan algebra is called Euclidean if there exists an associative inner product on it. Let us introduce the notion of the degree of an element x, denoted by deg(x): for \(x \in V\), deg(x) is the smallest integer r such that the set \(\{e,x, \ldots , x^r\}\) is linearly dependent. The rank of V is the largest deg(x) over all \(x \in V\), and it is denoted by rank(V). Throughout, we assume that V is a Euclidean Jordan algebra of rank r. Let \(L(x):V \rightarrow V\) be the linear operator such that \(x \circ y = L(x) y\) for every \(y \in V\).
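As a concrete sanity check (an illustration, not part of the paper's algorithm), the space \(Sym(3)\) of real symmetric matrices with the product \(x \circ y = (xy+yx)/2\) is a standard instance of a Euclidean Jordan algebra. The sketch below verifies axioms (i)–(ii) and the identity element \(e = I\) numerically:

```python
# Sym(3) with x∘y = (xy + yx)/2 as a Euclidean Jordan algebra (sketch).
import numpy as np

rng = np.random.default_rng(42)

def jordan(x, y):
    """Jordan product of symmetric matrices."""
    return (x @ y + y @ x) / 2.0

a, b = rng.standard_normal((2, 3, 3))
x, y = a + a.T, b + b.T        # two random symmetric matrices
x2 = jordan(x, x)

# (i) commutativity: x∘y = y∘x
assert np.allclose(jordan(x, y), jordan(y, x))
# (ii) Jordan identity: x∘(x²∘y) = x²∘(x∘y)
assert np.allclose(jordan(x, jordan(x2, y)), jordan(x2, jordan(x, y)))
# identity element: x∘I = x
assert np.allclose(jordan(x, np.eye(3)), x)
```

Note that the Jordan product is commutative but not associative, which is why the weaker identity (ii) replaces associativity.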

For each \(x \in V\) we define the quadratic representation \(P(x):=2L(x)^2-L(x^2),\) where \(L(x)^2:=L(x)L(x)\).
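In the symmetric-matrix instance \(Sym(3)\) used above for illustration, a short computation shows that the quadratic representation acts as \(P(x)y = xyx\); the sketch below checks this against the definition \(2L(x)^2-L(x^2)\):

```python
# Quadratic representation on Sym(3): P(x)y = 2 x∘(x∘y) − x²∘y = xyx (sketch).
import numpy as np

rng = np.random.default_rng(1)

def jordan(x, y):
    return (x @ y + y @ x) / 2.0

def P_apply(x, y):
    """Apply P(x) = 2L(x)² − L(x²) to y via the Jordan product."""
    return 2.0 * jordan(x, jordan(x, y)) - jordan(jordan(x, x), y)

a, b = rng.standard_normal((2, 3, 3))
x, y = a + a.T, b + b.T

# For symmetric matrices the quadratic representation is congruence by x.
assert np.allclose(P_apply(x, y), x @ y @ x)
```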

An element \(c \in V\) is called idempotent iff \(c \ne 0\) and \(c^2=c\). An idempotent c is called primitive iff it cannot be written as a sum \(c = c_1 + c_2\) of two nonzero idempotents \(c_1\) and \(c_2\). Two idempotents \(c_1\) and \(c_2\) are orthogonal if \(c_1 \circ c_2 = 0.\) A set of primitive idempotents \(\{ c_1,\ldots , c_r\}\) is called a Jordan frame iff \(c_i \circ c_j = 0\) for all \(i\ne j\) and \(\sum _{i=1}^{r}c_i=e\). The following theorem plays an important role in the theory of Euclidean Jordan algebras.

Theorem 3

(Spectral decomposition, Theorem III.1.2, Faraut and Korányi [9]) Let \(x \in V\). Then there exists a Jordan frame \(\{c_1,\ldots ,c_r\}\) and the real numbers \(\lambda _1(x), \ldots , \lambda _r(x)\) such that

$$\begin{aligned} x=\sum _{i=1}^{r}\lambda _i(x)c_i. \end{aligned}$$
(48)

The numbers \(\lambda _i(x)\) are the eigenvalues of x.
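For the illustrative instance \(Sym(3)\), the spectral decomposition of Theorem 3 is the ordinary symmetric eigendecomposition: the rank-one projectors \(c_i = u_i u_i^{\mathsf T}\) built from an orthonormal eigenbasis form a Jordan frame. A sketch:

```python
# Spectral decomposition x = Σ λ_i c_i on Sym(3) via numpy.linalg.eigh (sketch).
import numpy as np

rng = np.random.default_rng(2)
a = rng.standard_normal((3, 3))
x = a + a.T
lam, U = np.linalg.eigh(x)                       # eigenvalues ascending, U orthogonal
frames = [np.outer(U[:, i], U[:, i]) for i in range(3)]

assert np.allclose(sum(frames), np.eye(3))       # Σ c_i = e
for i in range(3):
    assert np.allclose(frames[i] @ frames[i], frames[i])   # c_i idempotent
for i in range(3):
    for j in range(i + 1, 3):
        # orthogonality: c_i ∘ c_j = 0 for i ≠ j
        assert np.allclose(frames[i] @ frames[j] + frames[j] @ frames[i],
                           np.zeros((3, 3)))
assert np.allclose(sum(l * c for l, c in zip(lam, frames)), x)   # x = Σ λ_i c_i
```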

Theorem 3 enables us to extend any real-valued univariate continuous function to elements of Euclidean Jordan algebras using eigenvalues. Let \(\varphi \) be a real-valued univariate function, defined and differentiable on the interval \((\kappa ^2,+\infty )\), such that \(\varphi '(t)>0\) for all \(t>\kappa ^2\). Let \(x = \sum _{i=1}^{r}\lambda _i(x)c_i\), where \(\{c_1,\ldots ,c_r\}\) is the corresponding Jordan frame. The vector-valued function \(\varphi \) is defined as \(\varphi (x):=\varphi (\lambda _1(x))c_1+\cdots +\varphi (\lambda _r(x))c_r\). Similarly, the vector-valued function \(\varphi '\) is given by \(\varphi '(x):=\varphi '(\lambda _1(x))c_1+\cdots +\varphi '(\lambda _r(x))c_r.\)
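The eigenvalue-wise extension can be sketched on \(Sym(3)\) (illustrative only): applying \(\varphi (t)=\sqrt{t}\) through the eigendecomposition of a positive definite matrix yields its matrix square root.

```python
# Vector-valued function φ(x) = Σ φ(λ_i(x)) c_i on Sym(3), with φ(t) = √t (sketch).
import numpy as np

rng = np.random.default_rng(3)
a = rng.standard_normal((3, 3))
x = a @ a.T + np.eye(3)                    # positive definite, so all λ_i(x) > 0
lam, U = np.linalg.eigh(x)

def apply_phi(lam, U, phi):
    """Build φ(x) by applying φ to each eigenvalue on the Jordan frame."""
    return U @ np.diag(phi(lam)) @ U.T

s = apply_phi(lam, U, np.sqrt)             # φ(x) for φ(t) = √t
assert np.allclose(s @ s, x)               # (√x)∘(√x) recovers x
```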

Consider the following notions: the trace, \(tr(x):=\sum _{i=1}^{r}\lambda _i(x)\); the square, \(x^2:=\sum _{i=1}^{r} \lambda _i(x)^2 c_i\); the square root, \(x^{\frac{1}{2}}:=\sum _{i=1}^{r}\sqrt{\lambda _i(x)}c_i\), whenever all \(\lambda _i(x) \ge 0\); and the inverse, \(x^{-1}:=\sum _{i=1}^{r} \lambda _i(x)^{-1}c_i\), whenever all \(\lambda _i(x) \ne 0\). We say that x is invertible if \(x^{-1}\) is defined. Consider the trace inner product \(\langle x,s \rangle :=tr(x \circ s),\) where \(x,s \in V.\) The Frobenius norm, \(\Vert \cdot \Vert _F\), is defined by \(\Vert x\Vert _F:=\sqrt{\langle x,x \rangle }.\) Then, \(\Vert x\Vert _F=\sqrt{tr(x^2)}=\sqrt{\sum _{i=1}^{r}\lambda _i^2(x)}.\) Furthermore, denote the largest and the smallest eigenvalue of x by \(\lambda _{max}(x)\) and \(\lambda _{min}(x)\), respectively. Then,

$$\begin{aligned} |\lambda _{max}(x)|\le \Vert x\Vert _F \quad \text {and} \quad | \lambda _{min}(x)| \le \Vert x\Vert _F. \end{aligned}$$
(49)
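On the symmetric-matrix instance (illustrative only), the trace inner product gives \(\Vert x\Vert _F = \sqrt{\sum_i \lambda_i^2(x)}\), the usual matrix Frobenius norm, and the eigenvalue bounds in (49) can be checked directly:

```python
# Frobenius norm via eigenvalues and the bounds |λ_max|, |λ_min| ≤ ‖x‖_F (sketch).
import numpy as np

rng = np.random.default_rng(4)
a = rng.standard_normal((4, 4))
x = a + a.T
lam = np.linalg.eigvalsh(x)
fro = np.sqrt(np.trace(x @ x))             # ‖x‖_F = √tr(x²)

assert np.isclose(fro, np.sqrt(np.sum(lam ** 2)))
# the bounds in (49)
assert abs(lam.max()) <= fro + 1e-12
assert abs(lam.min()) <= fro + 1e-12
```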

Let us introduce the cone of squares \(K =\{x^2:x \in V\}\). The following hold: \(x \in K \Leftrightarrow \lambda _i(x) \ge 0, i=1,\ldots ,r\) and \(x \in \text{ int }\, K \Leftrightarrow \lambda _i(x) > 0, i=1,\ldots ,r.\)
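For \(Sym(n)\) the cone of squares K is the cone of positive semidefinite matrices, so membership in K and in \(\text{int}\,K\) is read off from the eigenvalue signs, as stated above. A sketch:

```python
# Cone of squares on Sym(3): x ∈ K ⇔ all λ_i(x) ≥ 0 (sketch).
import numpy as np

rng = np.random.default_rng(5)
b = rng.standard_normal((3, 3))
x = b @ b.T                                  # a square b∘b would do; bbᵀ is PSD
assert np.all(np.linalg.eigvalsh(x) >= -1e-12)      # x ∈ K

# shifting by more than tr(x) pushes the smallest eigenvalue below zero
y = x - (np.trace(x) + 1.0) * np.eye(3)
assert np.linalg.eigvalsh(y).min() < 0              # y ∉ K
```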

We use the following lemmas in the analysis of the algorithm.

Lemma 12

(Lemma 2.6, Darvay and Takács [6]) Let us consider \(x \in V\) and \(\Vert x\Vert _F < 1.\) Then,

$$\begin{aligned} e-x \in \text{ int }\, K. \end{aligned}$$

Lemma 13

(Lemma 14, Schmieta and Alizadeh [37]) Let \(x,s \in V.\) One has

$$\begin{aligned} \lambda _{min}(x+s)\ge \lambda _{min}(x)+\lambda _{min}(s)\ge \lambda _{min}(x)-\Vert s\Vert _F \end{aligned}$$

and

$$\begin{aligned} \lambda _{max}(x+s)\le \lambda _{max}(x)+\lambda _{max}(s)\le \lambda _{max}(x)+\Vert s\Vert _F \end{aligned}$$
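The bounds of Lemma 13 (Weyl-type eigenvalue inequalities in the matrix case) can be checked numerically on the illustrative instance \(Sym(4)\):

```python
# Numeric check of the eigenvalue bounds of Lemma 13 on Sym(4) (sketch).
import numpy as np

rng = np.random.default_rng(6)
a, b = rng.standard_normal((2, 4, 4))
x, s = a + a.T, b + b.T

lmin = lambda m: np.linalg.eigvalsh(m)[0]      # smallest eigenvalue
lmax = lambda m: np.linalg.eigvalsh(m)[-1]     # largest eigenvalue
fro = lambda m: np.sqrt(np.trace(m @ m))       # Frobenius norm, m symmetric

tol = 1e-10
assert lmin(x + s) >= lmin(x) + lmin(s) - tol
assert lmin(x) + lmin(s) >= lmin(x) - fro(s) - tol
assert lmax(x + s) <= lmax(x) + lmax(s) + tol
assert lmax(x) + lmax(s) <= lmax(x) + fro(s) + tol
```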

Lemma 14

(Lemma 30, Schmieta and Alizadeh [37]) Let \(x,s \in K.\) Then,

$$\begin{aligned} \left\| P(x)^{\frac{1}{2}}s-e\right\| _F \le \Vert x \circ s-e\Vert _F. \end{aligned}$$

Lemma 15

(Theorem 4, Sturm [39]) Let \(x,s \in K.\) One has

$$\begin{aligned} \lambda _{min}\left( P(x)^{\frac{1}{2}}s \right) \ge \lambda _{min}(x \circ s). \end{aligned}$$
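Lemmas 14 and 15 can likewise be checked on the symmetric-matrix instance, where \(P(x)^{\frac{1}{2}}\) acts as \(y \mapsto x^{\frac{1}{2}} y x^{\frac{1}{2}}\) for \(x \in K\):

```python
# Numeric check of Lemmas 14 and 15 on Sym(3) with x, s PSD (sketch).
import numpy as np

rng = np.random.default_rng(7)

def psd(n):
    m = rng.standard_normal((n, n))
    return m @ m.T + 0.1 * np.eye(n)           # strictly positive definite

def sqrtm_sym(m):
    lam, U = np.linalg.eigh(m)
    return U @ np.diag(np.sqrt(lam)) @ U.T

x, s = psd(3), psd(3)
xh = sqrtm_sym(x)
Pxh_s = xh @ s @ xh                            # P(x)^{1/2} s
xs = (x @ s + s @ x) / 2.0                     # x ∘ s
e = np.eye(3)
fro = lambda m: np.sqrt(np.trace(m @ m))

# Lemma 14: ‖P(x)^{1/2}s − e‖_F ≤ ‖x∘s − e‖_F
assert fro(Pxh_s - e) <= fro(xs - e) + 1e-10
# Lemma 15: λ_min(P(x)^{1/2}s) ≥ λ_min(x∘s)
assert np.linalg.eigvalsh(Pxh_s)[0] >= np.linalg.eigvalsh(xs)[0] - 1e-10
```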

The following lemma presents the scaling introduced by Nesterov and Todd. This scaling was first studied for symmetric optimization by Faybusovich.

Lemma 16

(NT-scaling, Lemma 3.2, Faybusovich [11]) Let \(x,s \in \text{ int }\, K.\) Then, there exists a unique \(w \in \text{ int }\, K\) such that

$$\begin{aligned} x=P(w)s. \end{aligned}$$

Moreover,

$$\begin{aligned} w=P(x)^{\frac{1}{2}}\left( P(x)^{\frac{1}{2}}s\right) ^{-\frac{1}{2}}\left[ =P(s)^{-\frac{1}{2}}\left( P(s)^{\frac{1}{2}}x \right) ^{\frac{1}{2}}\right] . \end{aligned}$$

The point w is called the scaling point of x and s.

Then, there exists \({{\bar{v}}} \in \text{ int }\, K\) such that

$$\begin{aligned} {{\bar{v}}}=P(w)^{-\frac{1}{2}}x=P(w)^{\frac{1}{2}}s. \end{aligned}$$
(50)

Note that \(P(w)^{\frac{1}{2}}\) and its inverse \(P(w)^{-\frac{1}{2}}\) are automorphisms of \(\text{ int }\, K\).
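On the illustrative instance \(Sym(3)\), where \(P(w)s = wsw\), the NT-scaling point of Lemma 16 and the scaled point \({\bar{v}}\) of (50) can be computed explicitly and verified:

```python
# NT-scaling point on Sym(3): w = x^{1/2}(x^{1/2} s x^{1/2})^{-1/2} x^{1/2} (sketch).
import numpy as np

rng = np.random.default_rng(8)

def psd(n):
    m = rng.standard_normal((n, n))
    return m @ m.T + 0.1 * np.eye(n)            # point in int K

def powm_sym(m, p):
    """Eigenvalue-wise power of a symmetric positive definite matrix."""
    lam, U = np.linalg.eigh(m)
    return U @ np.diag(lam ** p) @ U.T

x, s = psd(3), psd(3)
xh = powm_sym(x, 0.5)
w = xh @ powm_sym(xh @ s @ xh, -0.5) @ xh       # scaling point (Lemma 16)

assert np.allclose(w @ s @ w, x)                # x = P(w)s

wh, whi = powm_sym(w, 0.5), powm_sym(w, -0.5)
v_bar = whi @ x @ whi                           # P(w)^{-1/2} x
assert np.allclose(v_bar, wh @ s @ wh)          # equals P(w)^{1/2} s, as in (50)
```

The uniqueness claim of Lemma 16 is not checked here; the sketch only confirms that the closed-form w satisfies the defining equation.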


Cite this article

Rigó, P.R., Darvay, Z. Infeasible interior-point method for symmetric optimization using a positive-asymptotic barrier. Comput Optim Appl 71, 483–508 (2018). https://doi.org/10.1007/s10589-018-0012-4
