
On solving parametric multiobjective quadratic programs with parameters in general locations

  • Original Research
  • Published in Annals of Operations Research

Abstract

While theoretical studies on parametric multiobjective programs (mpMOPs) have been steadily progressing, algorithmic development has been comparatively limited, despite the fact that parametric optimization can provide a complete parametric description of the efficient set. This paper puts forward the premise that parametrizing the efficient set of nonparametric MOPs can be combined with solving parametric MOPs, because the algorithms performing the former can also be used to accomplish the latter. This strategy is realized through (i) the development of a generalized scalarization, (ii) a computational study of selected parametric optimization algorithms, and (iii) applications in a real-life context. Several variants of a generalized weighted-sum scalarization allow one to scalarize mpMOPs so as to match the capabilities of the algorithms. Parametric multiobjective quadratic programs are scalarized into parametric quadratic programs (mpQPs) with linear and/or quadratic constraints. In the computational study, three algorithms capable of solving mpQPs are examined on synthetic instances, and two of them are applied to decision-making problems in statistics and portfolio optimization. The real-life context reveals the interplay between the scalarizations and provides additional insight into the obtained parametric solution sets.



Availability of data and material

Not applicable.

Code availability

The code was written in MATLAB and is available on request.

Notes

  1. The reader is referred to Appendix 1 for the results quoted from other sources.

References

  • Adelgren, N. (2019). mpLCP_solver. https://github.com/Nadelgren/mpLCP_solver. Accessed March 11, 2021.

  • Adelgren, N. (2020). Advancing parametric optimization: Theory and solution methodology for multiparametric linear complementarity problems with parameters in general locations. SpringerBriefs in Optimization. Springer.

  • Bednarczuk, E. (2004). Continuity of minimal points with applications to parametric multiple objective optimization. European Journal of Operational Research, 157, 59–67.

  • Bemporad, A., & Filippi, C. (2006). An algorithm for approximate multiparametric convex programming. Computational Optimization and Applications, 35(1), 87–108.

  • Benson, H. (1985). Multiple-objective linear programming with parametric criteria coefficients. Management Science, 31, 461–474.

  • Bitran, G. (1980). Linear multi-objective programs with interval coefficients. Management Science, 26, 694–706.

  • Bonnel, H., & Schneider, C. (2019). Post-Pareto analysis and a new algorithm for the optimal parameter tuning of the elastic net. Journal of Optimization Theory and Applications, 183, 993–1027.

  • Chankong, V., & Haimes, Y. (1983). Multiobjective decision making: Theory and methodology. North-Holland Series in System Science and Engineering. North Holland.

  • Cottle, R. (1990). The principal pivoting method revisited. Mathematical Programming, 48, 369–385. https://doi.org/10.1007/BF01582264

  • Cottle, R., & Guu, S. (1992). Two characterizations of sufficient matrices. Linear Algebra and its Applications, 170, 65–74. https://doi.org/10.1016/0024-3795(92)90410-C

  • Cottle, R., Pang, J., & Stone, R. (2009). The linear complementarity problem (1st ed.). Society for Industrial and Applied Mathematics. https://doi.org/10.1137/1.9780898719000

  • Cottle, R., Pang, J., & Venkateswaran, V. (1989). Sufficient matrices and the linear complementarity problem. Linear Algebra and its Applications, 114–115, 231–249. https://doi.org/10.1016/0024-3795(89)90463-1

  • Dellnitz, M., & Witting, K. (2009). Computation of robust Pareto points. International Journal of Computing Science and Mathematics, 2(3), 243–266.

  • Den Hertog, D., Roos, C., & Terlaky, T. (1992). The linear complementarity problem, sufficient matrices and the criss-cross method. In Combinatorial optimization, NATO ASI Series (Series F: Computer and Systems Sciences), 82, 253–257. https://doi.org/10.1007/978-3-642-77489-8_18

  • Diamond, S., Agrawal, A., & Murray, R. (2020). CVXPY. https://www.cvxpy.org/examples/basic/quadratic_program.html. Accessed March 3, 2021.

  • Ehrgott, M. (2005). Multicriteria optimization. New York: Springer.

  • Ehrgott, M., Greco, S., & Figueira, J. (2016). Multiple criteria decision analysis: State of the art surveys (2nd ed.). International Series in Operations Research and Management Science. New York: Springer.

  • El-Banna, A. (1993). A study on parametric multiobjective programming problems without differentiability. Computers & Mathematics with Applications, 26(12), 87–92.

  • Enkhbat, R., Guddat, J., & Chinchuluun, A. (2008). Parametric multiobjective optimization. In A. Chinchuluun, P. Pardalos, A. Migdalas, & L. Pitsoulis (Eds.), Pareto optimality, game theory and equilibria, Springer Optimization and Its Applications (Vol. 17, pp. 529–538). New York: Springer.

  • Fang, Y., & Yang, X. (2010). Smooth representations of optimal solution sets of piecewise linear parametric multiobjective programs. In Variational analysis and generalized differentiation in optimization and control, Springer Optimization and Its Applications (Vol. 47, pp. 163–176). New York: Springer.

  • Galvan, E., Malak, R., Hartl, D., & Baur, J. (2018). Performance assessment of a multi-objective parametric optimization algorithm with application to a multi-physical engineering system. Structural and Multidisciplinary Optimization, 58, 489–509.

  • Geoffrion, A. (1968). Proper efficiency and the theory of vector maximization. Journal of Mathematical Analysis and Applications, 22(3), 618–630. https://doi.org/10.1016/0022-247X(68)90201-1

  • Guddat, J., Vasquez, F., Tammer, K., & Wendler, K. (1985). Multiobjective and stochastic optimization based on parametric optimization. Mathematical Research (Vol. 26). Berlin: Akademie-Verlag.

  • Haimes, Y., Lasdon, L., & Wismer, D. (1971). On a bicriterion formulation of the problems of integrated system identification and system optimization. IEEE Transactions on Systems, Man, and Cybernetics, SMC-1(3), 296–297. https://doi.org/10.1109/TSMC.1971.4308298

  • Hartl, D., Galvan, E., Malak, R., & Baur, J. (2016). Parameterized design optimization of a magnetohydrodynamic liquid metal active cooling concept. Journal of Mechanical Design, 138, 031402.

  • Herceg, M., Kvasnica, M., Jones, C., & Morari, M. (2013). Multi-parametric toolbox 3.0. In 2013 European Control Conference (ECC) (pp. 502–510). IEEE.

  • Hirschberger, M., Qi, Y., & Steuer, R. (2010). Large-scale MV efficient frontier computation via a procedure of parametric quadratic programming. European Journal of Operational Research, 204(3), 581–588.

  • Hirschberger, M., Steuer, R., Utz, S., Wimmer, W., & Qi, Y. (2013). Computing the nondominated surface in tri-criterion portfolio selection. Operations Research, 61, 169–183.

  • Huy, N., Mordukhovich, B., & Yao, J. (2008). Coderivatives of frontier and solution maps in parametric multiobjective optimization. Taiwanese Journal of Mathematics, 12(8), 2083–2111.

  • Jayasekara, P., Adelgren, N., & Wiecek, M. (2019). On convex multiobjective programs with application to portfolio optimization. Journal of Multi-Criteria Decision Analysis, 27(3–4), 189–202.

  • Johansen, T. (2002). On multi-parametric nonlinear programming and explicit nonlinear model predictive control. In Proceedings of the 41st IEEE Conference on Decision and Control (Vol. 3, pp. 2768–2773). IEEE.

  • Klafszky, E., & Terlaky, T. (1992). Some generalizations of the criss-cross method for quadratic programming. Optimization, 24(1–2), 127–139. https://doi.org/10.1080/02331939208843783

  • Leverenz, J. (2016). Network target coordination for multiparametric programming. PhD dissertation, Clemson University.

  • Leverenz, J., Xu, M., & Wiecek, M. (2016). Multiparametric optimization for multidisciplinary engineering design. Structural and Multidisciplinary Optimization, 54(4), 1–16.

  • Li, D., Yang, J., & Biswal, M. (1999). Quantitative parametric connections between methods for generating noninferior solutions in multiobjective optimization. European Journal of Operational Research, 117, 84–99.

  • Lucchetti, R., & Miglierina, E. (2004). Stability for convex vector optimization problems. Optimization, 53(5–6), 517–528.

  • Naccache, P. (1979). Stability in multicriteria optimization. Journal of Mathematical Analysis and Applications, 68(2), 441–453.

  • Oberdieck, R., & Pistikopoulos, E. (2016). Multiobjective optimization with convex quadratic cost functions: A multiparametric programming approach. Computers and Chemical Engineering, 85, 36–39.

  • Pappas, I., Diangelakis, N., & Pistikopoulos, E. (2020). The exact solution of multiparametric quadratically constrained quadratic programming problems. Journal of Global Optimization, 79, 59–85.

  • Penot, J., & Sterna-Karwat, A. (1986). Parametrized multicriteria optimization: Continuity and closedness of optimal multifunctions. Journal of Mathematical Analysis and Applications, 120(1), 150–168.

  • Penot, J., & Sterna-Karwat, A. (1989). Parametrized multicriteria optimization: Order continuity of the marginal multifunctions. Journal of Mathematical Analysis and Applications, 144(1), 1–15.

  • Pistikopoulos, E., Diangelakis, N., & Oberdieck, R. (2021). Multi-parametric optimization and control. Operations Research and Management Science. New York: Wiley.

  • Romanko, O., Ghaffari-Hadigheh, A., & Terlaky, T. (2012). Multiobjective optimization via parametric optimization: Models, algorithms, and applications. In T. Terlaky & F. Curtis (Eds.), Modeling and optimization: Theory and applications (pp. 77–119). New York: Springer.

  • Ruetsch, G. (2010). Using interval techniques to solve a parametric multi-objective optimization problem. United States Patent No. 7,664,622 B2. https://patents.google.com/patent/US7664622B2/en

  • Sawaragi, Y., Nakayama, H., & Tanino, T. (1985). Theory of multiobjective optimization. Academic Press.

  • Steuer, R., Qi, Y., & Hirschberger, M. (2011). Comparative issues in large-scale mean-variance efficient frontier computation. Decision Support Systems, 51, 250–255.

  • Thuan, L., & Luc, D. (2000). On sensitivity in linear multiobjective programming. Journal of Optimization Theory and Applications, 107(3), 615–626.

  • Väliaho, H. (1994). A procedure for the one-parametric linear complementarity problem. Optimization, 29(3), 235–256. https://doi.org/10.1080/02331939408843953

  • Wiecek, M., & Dranichak, G. (2016). Robust multiobjective optimization for decision making under uncertainty and conflict. In J. Smith (Ed.), Optimization challenges in complex, networked, and risky systems, Tutorials in Operations Research (pp. 84–114). INFORMS.

  • Wiecek, M., Ehrgott, M., & Engau, A. (2016). Continuous multiobjective programming. In S. Greco, M. Ehrgott, & J. Figueira (Eds.), Multiple criteria decision analysis: State of the art surveys (pp. 738–815). New York: Springer.

  • Witting, K., Ober-Blöbaum, S., & Dellnitz, M. (2013). A variational approach to define robustness for parametric multiobjective optimization problems. Journal of Global Optimization, 57(2), 331–345.

  • Witting, K., Schulz, B., Dellnitz, M., Böcker, J., & Fröhleke, N. (2008). A new approach for online multiobjective optimization of mechatronic systems. International Journal of Software Tools and Technology Transfer, 10, 223–231.

  • Wittmann-Hohlbein, M., & Pistikopoulos, E. (2013). On the global solution of multi-parametric mixed integer linear programming problems. Journal of Global Optimization, 57(1), 51–73.

  • Zou, H., & Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B, 67(2), 301–320.


Funding

The authors gratefully acknowledge support by the United States Office of Naval Research through grant number N00014-16-1-2725.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Andrew C. Pangia.

Ethics declarations

Conflicts of interest

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

A. Appendix


The Appendix contains the following parts: a collection of auxiliary results that are referred to in this paper; a description of the spLCP method and its application to solving the BOQP in Example 15; the complete parametric solutions to the Elastic Net Problem in Example 21 with parameters \(\pmb {\lambda }\) and \((\alpha , \beta )\); and the complete parametric solutions to the Portfolio Problem in Example 27.

1.1 A.1. Auxiliary results

Using the notation and context of this paper, we present several results referenced from other works as well as the proof of Proposition 3 found in Sect. 3.

Corollary 3

(Corollary 7 in Jayasekara et al. (2019)) Let (MOP) be strictly convex and \(J \subset \{1,\ldots ,r\}.\) A solution \(\hat{{\textbf {x}}}\in \mathcal {X}\) is efficient to (MOP) if and only if \(\hat{\textbf {x}}= \hat{\textbf {x}}(\pmb {\lambda })\) is efficient to \(\min _{{\textbf {x}}\in \mathcal {X}} \;\left( \sum _{j\in J} \lambda _j g_j({\textbf {x}}), g_{j_1}({\textbf {x}}), \ldots , g_{j_{r-p}}({\textbf {x}}) \right) \) for some \(\pmb {\lambda } \in \varLambda \subseteq \mathbb {R}^{p}\).
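Corollary 3 underlies the weighted-sum scalarizations used throughout the paper. As a minimal illustration (our own toy example, not from the paper), consider the strictly convex bi-objective problem \(\min \left( x^2, (x-1)^2\right) \) over \(x \in \mathbb {R}\): the weighted-sum minimizer has a closed form, and sweeping the weight over \([0,1]\) traces the entire efficient set \([0,1]\).

```python
def x_hat(lam):
    # Stationarity of lam*g1 + (1-lam)*g2 with g1(x) = x**2, g2(x) = (x-1)**2:
    # 2*lam*x + 2*(1-lam)*(x-1) = 0  =>  x = 1 - lam
    return 1.0 - lam

# sweeping the weight parametrizes the efficient set [0, 1]
efficient_points = [x_hat(k / 10) for k in range(11)]
```

Here the parametric description of the efficient set is available in closed form; for the mpQPs studied in the paper, the parametric algorithms compute such descriptions piecewise.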

Theorem 4

(Theorem 4.1 in Chankong and Haimes (1983)) Let \(\pmb {\epsilon } \in \mathbb {R}^r.\) A solution \(\hat{{\textbf {x}}}\in \mathcal {X}\) is efficient to (MOP) if and only if \(\hat{{\textbf {x}}} = \hat{\textbf {x}}(\pmb {\epsilon }) \) is an optimal solution to (5) with \(g_j(\hat{\textbf {x}}) = \epsilon _j\) for every \(j = 1, \ldots , r, j \ne i\), and for every \(i = 1, \ldots , r.\)

Corollary 4

(Corollary 8.1 in Jayasekara et al. (2019)) Let (MOP) be strictly convex, \(\pmb \lambda \in \varLambda \), \(\pmb {\epsilon }\in \mathcal {E}\), and \(\hat{{\textbf {x}}}=\hat{{\textbf {x}}}(\pmb \lambda ,\pmb {\epsilon })\in \mathcal {X}\) be an optimal solution to (7). If the second-order sufficient conditions for optimality hold at \(\hat{{\textbf {x}}}\), then \(\hat{{\textbf {x}}}\) is an efficient solution to (MOP).

Proof of Proposition 3

Proof

Let \(\hat{{\textbf {x}}}=\hat{{\textbf {x}}}(\pmb {\lambda }, \pmb {\epsilon })\) be an optimal solution to (9) for some \(i \notin J\). Therefore, \(\hat{{\textbf {x}}}\) is feasible to (9), that is,

$$\begin{aligned} \sum _{j \in J}\lambda _j g_j(\hat{{\textbf {x}}}) \le \epsilon \end{aligned}$$
(34)
$$\begin{aligned} g_j(\hat{{\textbf {x}}}) \le \epsilon _j, \qquad j\notin J, \; j \ne i. \end{aligned}$$
(35)

Assume \(\hat{{\textbf {x}}} \notin \mathcal {X}_{wE}\). Then there exists a point \(\bar{{\textbf {x}}} \in {\mathcal {X}}\) such that

$$\begin{aligned} g_j(\bar{{\textbf {x}}}) < g_j(\hat{{\textbf {x}}}) \quad \text { for all } \; j = 1, \ldots , r. \end{aligned}$$
(36)

Since \(\lambda _j \ge 0\) for \(j \in J\), not all equal to 0, we have \(\lambda _j g_j(\bar{{\textbf {x}}}) \le \lambda _j g_j(\hat{{\textbf {x}}})\) for \(j \in J\), with at least one strict inequality; therefore \(\sum _{j\in J}\lambda _j g_j(\bar{{\textbf {x}}}) < \sum _{j\in J} \lambda _j g_j(\hat{{\textbf {x}}})\). Using (34), we obtain

$$\begin{aligned} \sum _{j\in J}\lambda _j g_j(\bar{{\textbf {x}}}) < \epsilon , \end{aligned}$$
(37)

while from (35) and (36),

$$\begin{aligned} g_j(\bar{{\textbf {x}}}) < \epsilon _j, \qquad j\notin J, \; j \ne i. \end{aligned}$$
(38)

Since \(\bar{{\textbf {x}}} \in \mathcal {X}\), (37) and (38) make \(\bar{{\textbf {x}}}\) feasible to (9). From (36), \(g_i(\bar{{\textbf {x}}}) < g_i(\hat{{\textbf {x}}})\), which contradicts the optimality of \(\hat{{\textbf {x}}}\). Therefore, \(\hat{{\textbf {x}}}\) is weakly efficient to (MOP). \(\square \)

1.2 A.2. The spLCP method

We present the reader with the details of the spLCP method summarized in Sect. 4.2.

1.2.1 Reformulation of spLCP

Given (14) and the resulting spLCP with \(\varLambda ' = [0,1]\), the matrix \(M(\lambda )\) and vector \({{\,\mathrm{{\textbf {q}}}\,}}(\lambda )\) are introduced:

$$\begin{aligned} M(\lambda )=M+\lambda \varDelta M, \qquad {{\,\mathrm{{\textbf {q}}}\,}}(\lambda )={{\,\mathrm{{\textbf {q}}}\,}}+\lambda \varDelta {{\,\mathrm{{\textbf {q}}}\,}}, \end{aligned}$$

where \(M=\begin{bmatrix} Q_2 &{} \quad A^T\\ -A &{} \quad 0 \end{bmatrix} \in \mathbb {R}^{h \times h} , \varDelta M=\begin{bmatrix} Q_1-Q_2 &{} \quad {0}\\ 0 &{} \quad 0 \end{bmatrix} \in \mathbb {R}^{h \times h}\), \({{\,\mathrm{{\textbf {q}}}\,}}=\begin{bmatrix} {\textbf {p}}_2\\ {\textbf {b}}\end{bmatrix}\in \mathbb {R}^h\), and \(\varDelta {{\,\mathrm{{\textbf {q}}}\,}}=\begin{bmatrix} {\textbf {p}}_1-{\textbf {p}}_2\\ {\textbf {0}} \end{bmatrix} \in \mathbb {R}^h\). The spLCP can be written as:

$$\begin{aligned} \begin{aligned} {{\,\mathrm{{\textbf {w}}}\,}}-M(\lambda ){\textbf {z}}&= {{\,\mathrm{{\textbf {q}}}\,}}(\lambda ) \\ {{\,\mathrm{{\textbf {w}}}\,}}^{T}{\textbf {z}}&=0\\ {{\,\mathrm{{\textbf {w}}}\,}},{\textbf {z}}&\geqq {\textbf {0}}. \end{aligned} \end{aligned}$$
(39)

and is referred to as the spLCP(\(M(\lambda ),{{\,\mathrm{{\textbf {q}}}\,}}(\lambda )\)) or simply the spLCP if the context is known.
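Assuming exactly the block structure given above (the function name and the plain list-of-lists matrix representation are ours), the assembly of \(M(\lambda )\) and \({{\,\mathrm{{\textbf {q}}}\,}}(\lambda )\) for a fixed \(\lambda \) can be sketched as:

```python
def build_splcp(Q1, Q2, A, p1, p2, b, lam):
    """Assemble M(lam) = M + lam*dM and q(lam) = q + lam*dq from the blocks."""
    n, m = len(Q1), len(A)
    h = n + m
    M = [[0.0] * h for _ in range(h)]
    for i in range(n):
        for j in range(n):
            # (1,1) block: Q2 + lam*(Q1 - Q2)
            M[i][j] = Q2[i][j] + lam * (Q1[i][j] - Q2[i][j])
        for j in range(m):
            M[i][n + j] = A[j][i]      # A^T block
    for i in range(m):
        for j in range(n):
            M[n + i][j] = -A[i][j]     # -A block
    # q(lam) = [p2 + lam*(p1 - p2); b]
    q = [p2[i] + lam * (p1[i] - p2[i]) for i in range(n)] + [float(x) for x in b]
    return M, q
```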

Based on Assumption 2.1, rank\((\varDelta M) = n\). This assumption also assures that the Cholesky decomposition can be applied to matrix \(\varDelta M\), which produces auxiliary matrices used in the spLCP method.

1.2.2 Full rank factorization of matrix \(\varDelta M\)

The method relies on a matrix factorization that is needed to transform spLCP (39) into an augmented spLCP. Consider a full rank matrix factorization of matrix \(\varDelta M\)

$$\begin{aligned} \varDelta M= \varPsi \varPsi ^T, \end{aligned}$$

where \(\varPsi \in \mathbb {R}^{h \times n}\) is a lower triangular matrix with positive diagonal elements. Given \(\varDelta M= \begin{bmatrix} Q_1-Q_2 &{} \quad 0\\ 0 &{} \quad 0 \end{bmatrix}\), matrix \(\varPsi \) is composed of two matrices such that \( \varPsi = \left[ \begin{array}{c} \varPsi _1 \\ 0 \end{array} \right] , \) where \(\varPsi _1 \in \mathbb {R}^{n \times n}\) is a lower triangular matrix and 0 is a zero matrix of size \( {m \times n}\) . Then

$$\begin{aligned} \begin{bmatrix} Q_1-Q_2 &{} \quad 0\\ 0 &{}\quad 0 \end{bmatrix}= \varDelta M = \varPsi \varPsi ^T= \left[ \begin{array}{c} \varPsi _1 \\ 0 \end{array} \right] \left[ \begin{array}{cc} \varPsi ^T_1&\quad 0^T \end{array} \right] = \left[ \begin{array}{cc} \varPsi _1 \varPsi ^T_1 &{} \quad 0\\ 0 &{} \quad 0 \end{array} \right] . \end{aligned}$$

Note that only the matrix \(Q_1-Q_2\) in the block (1, 1) of \(\varDelta M\) is decomposed, i.e., \(Q_1-Q_2=\varPsi _1 \varPsi ^T_1 \), and the Cholesky decomposition is used to obtain \(\varPsi _1\).
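A minimal sketch of the Cholesky step on the block \(Q_1-Q_2\), assumed symmetric positive definite as in the text (the function name is ours):

```python
def cholesky_lower(S):
    """Lower-triangular L with L @ L^T = S, for symmetric positive definite S."""
    n = len(S)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (S[i][i] - s) ** 0.5   # positive diagonal
            else:
                L[i][j] = (S[i][j] - s) / L[j][j]
    return L
```

Stacking the zero block below the result yields \(\varPsi = [\varPsi _1; 0]\) as above.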

In the spLCP method, the parameter \(\lambda \) is updated in every iteration. This update, discussed further below, uses a vector \( {\textbf {t}} \in \mathbb {R}^n\), which is computed as follows. Let

$$\begin{aligned} \varDelta {{\,\mathrm{{\textbf {q}}}\,}}= \varPsi {\textbf {t}}, \end{aligned}$$
(40)

or equivalently

$$\begin{aligned} \begin{bmatrix} {\textbf {p}}_1-{\textbf {p}}_2\\ {\textbf {0}} \end{bmatrix} =\begin{bmatrix} \varPsi _1\\ 0 \end{bmatrix} {\textbf {t}}, \end{aligned}$$

which is simplified to

$$\begin{aligned} {\textbf {p}}_1-{\textbf {p}}_2 = \varPsi _1 {\textbf {t}}. \end{aligned}$$
(41)

System (41) is solved for \({\textbf {t}}\) using the forward substitution method. Having computed \(\varPsi \) and \({\textbf {t}}\), an augmented spLCP is constructed.
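Forward substitution for system (41) can be sketched as follows, assuming \(\varPsi _1\) is lower triangular with nonzero diagonal (the function name is ours):

```python
def forward_substitute(L, rhs):
    """Solve L t = rhs for lower-triangular L, as in system (41)."""
    n = len(rhs)
    t = [0.0] * n
    for i in range(n):
        # each row uses only already-computed components t[0..i-1]
        t[i] = (rhs[i] - sum(L[i][k] * t[k] for k in range(i))) / L[i][i]
    return t
```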

1.2.3 Overview and initialization

Let \(\lambda \in [0,1]\) be fixed and B be a basis for the linear system in (39). B is called a complementary basis for (39) if \(|\{i,i+h\}\cap B|=1\) for \(i = 1,\ldots ,h.\)

Given a complementary basis B, the vector \({\textbf {v}}( \lambda ) = [{{\,\mathrm{{\textbf {w}}}\,}}( \lambda ), {\textbf {z}}( \lambda )] = [{\textbf {v}}_B( \lambda ), {\textbf {v}}_N( \lambda )] \in \mathbb {R}^{2h}\) consists of a subvector of basic variables, \({\textbf {v}}_B( \lambda ) = \{v_i( \lambda ): i \in B\}\), and a subvector of nonbasic variables, \({\textbf {v}}_N( \lambda ) = \{v_i( \lambda ): i \in N\}\), where \(|B|= |N| = h\), and such that one variable in the pair \((w_i( \lambda ), z_i( \lambda ))\) is basic for each \(i = 1,\ldots ,h\). A complementary basis is feasible for (39) if the associated \({\textbf {v}}_B( \lambda ) \geqq \pmb {0}\).
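The pairing condition \(|\{i,i+h\}\cap B|=1\) can be checked directly; a small sketch with 1-based variable indices (our own helper, not from the paper):

```python
def is_complementary_basis(B, h):
    """Exactly one of each complementary pair {i, i+h} is basic, i = 1..h."""
    Bs = set(B)
    return len(Bs) == h and all(len({i, i + h} & Bs) == 1 for i in range(1, h + 1))
```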

Given \(\varPsi \) and \({\textbf {t}}\) from the factorization and defining a vector of artificial variables, \({\textbf {v}}_{{\textbf {t}}} \in \mathbb {R}^n\), the augmented spLCP is formulated:

$$\begin{aligned} \begin{array}{ccc} \left( \begin{bmatrix} \mathscr {I}_1 &{} \quad 0\\ 0 &{} \quad \mathscr {I}_2 \end{bmatrix}-\begin{bmatrix} M( \lambda ) &{} \quad -\varPsi \\ \varPsi ^T &{} \quad 0 \end{bmatrix}\right) \begin{bmatrix} {\textbf {v}}_B \\ {\textbf {v}}_{{\textbf {t}}} \end{bmatrix}&{}=\begin{bmatrix} {{\,\mathrm{{\textbf {q}}}\,}}(\lambda )\\ {\textbf {t}} \end{bmatrix}\\ {\textbf {v}}_B &{}\quad \geqq {\textbf {0}}\\ {\textbf {v}}_{{\textbf {t}}} &{}\quad \text {free}, \end{array} \end{aligned}$$
(42)

where \(\mathscr {I}_1\) is an \(h \times h\) identity matrix and \(\mathscr {I}_2\) is an \(n \times n\) identity matrix.

The overall goal is to solve (42) for \({\textbf {v}}_{B} = {\textbf {v}}_{B}( \lambda )\). At the initialization, \({\textbf {v}}_B = {{\,\mathrm{{\textbf {w}}}\,}}\) and \({\textbf {v}}_N = {\textbf {z}}\). During the solution process, the initially basic and nonbasic variables change their places, while the artificial variables in \({\textbf {v}}_{{\textbf {t}}}\) are always basic for (42) and considered dummy with no meaning in the solution to (39). Because the variables in \({\textbf {v}}_{{\textbf {t}}}\) are always basic, the basis defined for (39) is also used in the process of solving (42) and the initial basis, \(B_0 = \{1, \ldots , h\}\), is used. However, since there are now \(h+n\) basic variables, the definition of the complementary basis changes. B is called a complementary basis for (42) if \(|\{i,i+n+h\}\cap B|=1\) for \(i = 1,\ldots ,h.\) A complementary basis is feasible for (42) and denoted as FCB if the associated \({\textbf {v}}_B \geqq \pmb {0}\).

The entire problem (42) is saved in an initial Simplex-type Tableau \(T_{B_0}(\lambda )\) as given in Table 8. Tableau \(T_{B_0}(\lambda )\) has \(h+n\) rows and \(2(h+n)+1\) columns. Defining index sets of the columns of this tableau is useful. Let \(\mathbb {E}= \{1, \ldots , h, h+n+1, \ldots , 2h+n\}\) contain the indices of the columns associated with the variables in \({\textbf {v}}\), that is, \(\mathbb {E}= B \cup N\); \(\mathbb {E}'=\{1,2,\ldots ,2h+n\}\) be the set of indices of \(2h + n\) columns starting from the second column and ending at the \((2h+n)^{\text {th}}\) column; and \(K=\{2h+n+1, \ldots , 2h+2n\}\) be the set of indices of the last n columns of Tableau 8. Note that none of these sets contains the first column, \(\pmb {\mathrm {C}}_{.0}\). Note that to select the pivot element with the Criss-Cross Method in Algorithm 3, the signs of matrices \(M(\lambda )\) and \(\varPsi \) in (42) are changed when these matrices are put into the columns \(\pmb {\mathrm {C}}_{.((h+n)+1)},\ldots \pmb {\mathrm {C}}_{.(2h+2n)}\) in Table 8.

Table 8 Initial Tableau \(T_{B_0}( \lambda )\)

At the beginning of Phase I, the right hand side of the linear system in (42), \({{\,\mathrm{{\textbf {q}}}\,}}(\lambda )\), is stored in the first h rows of column \(\pmb {\mathrm {C}}_{.0}\). The last n rows of \(\pmb {\mathrm {C}}_{.0}\) contain the vector \({\textbf {t}}\) obtained by solving (40). At the end of both phases, the solution vector \({\textbf {v}}_B(\lambda )\) is retrieved from the first h rows of column \(\pmb {\mathrm {C}}_{.0}\). The pivoting operations in the tableau convert a complementary basis into another complementary basis. Two types of pivotal operations are defined but neither of them guarantees a new complementary basis to be feasible.
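Under the layout just described, columns \(\pmb {\mathrm {C}}_{.0}\), then the identity columns of the basic variables, then the sign-flipped block matrix of (42), the construction of the initial tableau can be sketched as follows; the function name and list-of-lists representation are ours:

```python
def initial_tableau(Mlam, Psi, qlam, t):
    """Initial tableau for (42): h+n rows, 2(h+n)+1 columns."""
    h, n = len(Mlam), len(t)
    size = h + n
    # block matrix of (42): [[M(lam), -Psi], [Psi^T, 0]]
    blk = [[Mlam[i][j] for j in range(h)] + [-Psi[i][j] for j in range(n)]
           for i in range(h)]
    blk += [[Psi[j][i] for j in range(h)] + [0.0] * n for i in range(n)]
    T = []
    for i in range(size):
        row = [qlam[i] if i < h else t[i - h]]        # column C.0: [q(lam); t]
        row += [float(i == j) for j in range(size)]   # identity columns
        row += [-blk[i][j] for j in range(size)]      # sign-flipped block
        T.append(row)
    return T
```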

Definition 7

For index \(i \in \mathbb {E}\), the complementary index of i is defined as

$$\begin{aligned} \bar{i}={\left\{ \begin{array}{ll} i + n + h &{} \text {for} \;\; i \in \{ 1, \ldots , h\} \\ i - n - h &{} \text {for} \;\; i \in \{h + n + 1, \ldots , 2h + n\}. \end{array}\right. } \end{aligned}$$
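A direct transcription of Definition 7, consistent with the index set \(\mathbb {E}= \{1, \ldots , h\} \cup \{h+n+1, \ldots , 2h+n\}\) defined earlier (the helper name is ours):

```python
def complementary_index(i, h, n):
    """Complementary index of i in E = {1..h} U {h+n+1..2h+n}."""
    if 1 <= i <= h:
        return i + n + h
    if h + n + 1 <= i <= 2 * h + n:
        return i - n - h
    raise ValueError("index not in E")
```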

Definition 8

  1. Replacing a single element of a basis with its complement is called a diagonal pivot.

  2. Replacing two elements of a basis with their complements is called an exchange pivot.

Phase I is executed to find an initial complementary basic feasible solution (CBFS) for spLCP (39) for the case when \(\lambda =0\) and \({{\,\mathrm{{\textbf {q}}}\,}}\ngeq {\textbf {0}}\). When \(\lambda =0\) and \({{\,\mathrm{{\textbf {q}}}\,}}\ge {\textbf {0}}\), the method starts with Phase II, in which a nonparametric LCP is solved for a specific value of the parameter \(\lambda \) that is subsequently updated. The pseudocode for solving spLCP (39) is given in Algorithm 2. In the initial Tableau \(T_{B_{0}}(\lambda )\), \(\lambda =0\).

Let \(T_{B_0}(\lambda )_{i, 0}\) be the element in the \(i^{\text {th}}\) row and the first column of Tableau \(T_{B_{0}}(\lambda )\). If \(T_{B_0}(\lambda )_{i, 0} < 0\) for at least one \(i\in \{1, \ldots , h\}\), Phase I is executed before Phase II. However, if all those elements are nonnegative, the algorithm begins directly with Phase II.


1.2.4 Phase I

In Phase I, the Criss-Cross Method (given in Algorithm 3) is used to find an initial CBFS for spLCP (39) for the case when \(\lambda = 0\) and \({{\,\mathrm{{\textbf {q}}}\,}}\ngeq \pmb {0}\). The Criss-Cross Method solves the nonparametric LCP obtained from the spLCP with a fixed \(\lambda \) (Den Hertog et al., 1992). Let \(\lambda =\bar{\lambda }\) be fixed, \(B_0\) be an initial complementary basis, \( {\textbf {v}}_{B_0}(\bar{\lambda }) ={{\,\mathrm{{\textbf {w}}}\,}}(\bar{\lambda })\) be the vector of basic variables, and \({\textbf {v}}_N(\bar{\lambda })= {\textbf {z}}(\bar{\lambda })\) be the vector of nonbasic variables. If \({{\,\mathrm{{\textbf {q}}}\,}}(\bar{\lambda }) \ge {\textbf {0}}\), i.e., if \(T_{B_0}(\bar{\lambda })_{i, 0} \ge {0}\) for all \(i=1, \ldots , h\), then the algorithm stops immediately and \(({{\,\mathrm{{\textbf {w}}}\,}}(\bar{\lambda }) ={\textbf {q}}(\bar{\lambda }), {\textbf {z}}(\bar{\lambda }) ={\textbf {0}})\) is the initial CBFS for spLCP (39). Otherwise \({{\,\mathrm{{\textbf {q}}}\,}}(\bar{\lambda }) \ngeq {\textbf {0}}\) and pivoting is used to obtain a complementary basis \(B'\) adjacent to the current basis B, by replacing at most two elements in B. If the pivot element is negative, the diagonal pivot is executed, and if the pivot element is zero, the exchange pivot is executed. Let \(T_B(\bar{\lambda })_{r,\bar{r}}\) be the element in Tableau \(T_B(\bar{\lambda })\) associated with index \(r \in B\) and its complementary index \(\bar{r} \in N\). The element \(T_B(\bar{\lambda })_{r,\bar{r}}\) becomes the pivot element if the index \(r \in {B}\) is chosen as the smallest index with a negative entry in the first column of Tableau \(T_B(\bar{\lambda })\), and \(\bar{r} \in N\). If there is no such r, i.e., \(T_{B}(\bar{\lambda })_{i,0} \ge 0\) for all \(i=1, \ldots , h\), the algorithm stops; thus B is an FCB and a CBFS for spLCP (39) has been found.
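For intuition only, here is a heavily simplified sketch of criss-cross-style pivoting on the nonparametric LCP \({{\,\mathrm{{\textbf {w}}}\,}}- M{\textbf {z}}= {{\,\mathrm{{\textbf {q}}}\,}}\), \({{\,\mathrm{{\textbf {w}}}\,}},{\textbf {z}}\geqq {\textbf {0}}\), \({{\,\mathrm{{\textbf {w}}}\,}}^T{\textbf {z}}=0\). It implements a least-index rule with diagonal pivots only and raises an error where Algorithm 3 would perform an exchange pivot; it is not the paper's Algorithm 3, and all names are ours.

```python
def solve_lcp_diagonal(M, q, max_iter=100):
    """Diagonal-pivot sketch: tableau rows [I | -M | q], w_i has index i, z_i index h+i."""
    h = len(q)
    T = [[float(i == j) for j in range(h)] + [-M[i][j] for j in range(h)] + [q[i]]
         for i in range(h)]
    basis = list(range(h))                       # w basic initially
    for _ in range(max_iter):
        rows = [i for i in range(h) if T[i][-1] < -1e-12]
        if not rows:
            break                                # q-column nonnegative: done
        i = min(rows, key=lambda r: basis[r])    # least-index rule
        col = (basis[i] + h) % (2 * h)           # complementary column
        piv = T[i][col]
        if piv >= -1e-12:
            raise NotImplementedError("exchange pivot needed; sketch covers diagonal pivots only")
        T[i] = [x / piv for x in T[i]]           # diagonal pivot
        for r in range(h):
            if r != i and abs(T[r][col]) > 1e-12:
                f = T[r][col]
                T[r] = [a - f * b for a, b in zip(T[r], T[i])]
        basis[i] = col
    w, z = [0.0] * h, [0.0] * h
    for r, b in enumerate(basis):
        (w if b < h else z)[b % h] = T[r][-1]
    return w, z
```

For the positive definite example \(M=\begin{bmatrix} 2 & 1\\ 1 & 2\end{bmatrix}\), \({{\,\mathrm{{\textbf {q}}}\,}}=(-1,-1)\), two diagonal pivots yield \({\textbf {z}}=(1/3,1/3)\), \({{\,\mathrm{{\textbf {w}}}\,}}={\textbf {0}}\).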


1.2.5 Phase II

In Phase II, the parameter \(\lambda \) and basis B are consecutively updated. Let B be a complementary basis for (39). For a given \(\lambda \), in tableau \(T_{B}(\lambda )\) in Table 8, consider \(h + n + 1\) columns consisting of the first (\(0^{\text {th}}\)) column, h columns associated with the current nonbasic variables, and the last n columns of the tableau in Table 8.

For each \(i=1, \ldots , h\) and the index set K, define the following auxiliary vector consisting of \(6n + 1\) elements

$$\begin{aligned} \mathcal {B}_{i0}=\bigg (T_{B}(\lambda )_{i, 0} \;\;\; (-1)^1T_{B}(\lambda )_{i, K} \, T_{B}(\lambda )_{n, 0} \;\;\; (-1)^2T_{B}(\lambda )_{i, K} \, T_{B}(\lambda )_{n, K} \, T_{B}(\lambda )_{n, 0} \;\;\; \ldots \;\;\; (-1)^nT_{B}(\lambda )_{i, K} \, T_{B}(\lambda )^{n-1}_{n, K} \, T_{B}(\lambda )_{K, 0} \bigg ), \end{aligned}$$
(43)

where the terms in (43) denote the following elements in tableau \(T_{B}(\lambda )\):

  • \(T_{B}(\lambda )_{i, 0}\) - the element in the \(i^{th}\) row and the first (\(0^{\text {th}}\)) column,

  • \(T_{B}(\lambda )_{i, K}\) - the vector of elements in the \(i^{th}\) row and all columns associated with the index set K,

  • \(T_{B}(\lambda )_{n, 0}\) - the vector of elements in the last n rows and the first (\(0^{\text {th}}\)) column,

  • \(T_{B}(\lambda )_{n, K}\) - the matrix of elements in the last n rows and all columns associated with the index set K,

  • \(T_{B}(\lambda )_{K, 0}\) - the vector of elements in all rows associated with the index set K and the first (\(0^{\text {th}}\)) column.

To continue we need the following definition.

Definition 9

Let \(\pmb {\upsilon } \in \mathbb {R}^n\).

  1. A nonzero vector \(\pmb {\upsilon }\) is lexicographically positive (negative), \(\pmb {\upsilon } \succ {\textbf {0}}\) (\(\pmb {\upsilon } \prec {\textbf {0}}\)), if its first nonzero component is positive (negative).

  2. A vector \(\pmb {\upsilon }\) is lexicographically nonnegative (nonpositive), \(\pmb {\upsilon } \succeq {\textbf {0}}\) (\(\pmb {\upsilon } \preceq {\textbf {0}}\)), if \(\pmb {\upsilon } ={\textbf {0}} \) or \(\pmb {\upsilon } \succ {\textbf {0}}\) (\(\pmb {\upsilon } ={\textbf {0}} \) or \(\pmb {\upsilon } \prec {\textbf {0}}\)).
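Definition 9 translates directly into code (helper names are ours):

```python
def lex_positive(v):
    """First nonzero component is positive."""
    for x in v:
        if x != 0:
            return x > 0
    return False          # the zero vector is not lexicographically positive

def lex_nonnegative(v):
    return all(x == 0 for x in v) or lex_positive(v)
```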

Phase II is presented in Algorithm 4, which runs until a CBFS is obtained for \(\lambda =1\). If vector \( \mathcal {B}_{i0} \succeq {\textbf {0}}\) for all \(i= 1, \ldots , h\), the parameter \(\lambda \) is updated using Algorithm 5. Otherwise, the current complementary basis is updated using Algorithm 7.

[Algorithm 4]

1.2.6 A.2.6. Updating parameter \(\lambda \)

Parameter \(\lambda \) is updated in Algorithm 5. Two types of polynomial equations, \(P_{i1}(\tau )=0\) for \(i=1, \ldots , h\), and \(P_2(\tau )=0\), are solved to obtain an increment \(\tau \) of the parameter \(\lambda \), whose current value lies between 0 and 1. Let \(\alpha _i\) denote the smallest positive root of \(P_{i1}(\tau )=0\) for each \(i=1, \ldots , h\); the smallest \(\alpha _i\) value is recorded as \(\rho _1\). Similarly, the smallest positive root of \(P_2(\tau )=0\) is recorded as \(\rho _2\). If a polynomial equation has no positive root, the corresponding value \(\rho _1\) or \(\rho _2\) is set to \(\infty \). Then the smallest value among \(1-\lambda ,\rho _1\) and \(\rho _2\) is recorded and denoted as \(\rho \), i.e., \(\rho =\min \{1-\lambda ,\rho _1,\rho _2\}\). Based on the value of \(\rho \), one of three strategies is applied to update \(\lambda \). If \(\rho =1-\lambda \), the current tableau is feasible for the invariancy interval \([\lambda ,1]\); if the condition \(\mathcal {B}_{i0} \succeq {{\textbf {0}}}\) holds for \(T_{B_0}(1)\), the algorithm stops and spLCP (39) has been solved for \(\lambda \in [0,1]\), and otherwise the basis has to be updated. If \(\rho =\rho _1\), the current tableau is feasible for the invariancy interval \([\lambda ,\lambda +\rho _1]\) and the initial tableau is updated using Algorithm 6. If \(\rho =\rho _2\), set \(\lambda =\lambda +\rho _2\) in the initial Tableau \(T_{B_{0}}(\lambda )\) and subsequently check the condition \(\mathcal {B}_{i0} \succeq {{\textbf {0}}}\).
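The case analysis above can be sketched as follows (a schematic outline only; `alphas` collects the smallest positive roots \(\alpha _i\), with `inf` standing in for a missing positive root, and the function name is ours):

```python
from math import inf

def next_step(lam, alphas, rho2):
    """Select the update strategy of the parameter-update step.

    Returns the new value of lambda together with which of the three
    cases applied: 'end' (rho = 1 - lambda), 'rho1' (update the initial
    tableau), or 'rho2' (re-check the lexicographic condition at the
    new lambda)."""
    rho1 = min(alphas) if alphas else inf
    rho = min(1 - lam, rho1, rho2)
    if rho == 1 - lam:
        return 1.0, 'end'
    if rho == rho1:
        return lam + rho1, 'rho1'
    return lam + rho2, 'rho2'
```

With the values obtained in Iteration 1 of the worked example below, `next_step(0, [1/10, 1/6, inf], inf)` returns `(0.1, 'rho1')`.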

[Algorithm 5]

1.2.7 A.2.7. Updating the initial tableau when \(\rho =\rho _1\)

Algorithm 6 is designed to provide a new initial tableau when \(\rho =\rho _1\) in Algorithm 5. Let \(\mathscr {I}\) be an \(n \times n\) identity matrix. First, the matrix \(-\rho ^{-1}_1\mathscr {I}\) is added to the matrix \(T_{B_0}(\lambda )_{n, K}\). Second, pivotal operations are performed on all variables in the last n rows. Third, the elements in the last n rows of \(T_{B}(\lambda )\) are multiplied by \(-\rho _1^{-1}\), and the columns \(j \in K\) are multiplied by \(\rho _1^{-1}\). In the last step, the matrix \(-\rho ^{-1}_1\mathscr {I}\) is again added to the matrix \(T_{B_0}(\lambda )_{n, K}\). These operations provide an initial tableau for the next iteration with the initial basis \(B_0\). The first column of the starting Tableau \(T_{B_0}(\lambda +\rho _1)\) remains unchanged by the steps in Algorithm 6.

[Algorithm 6]

1.2.8 A.2.8. Updating basis B

Let \(T_B(\lambda )\) be the current tableau for a complementary basis B and for a parameter value \(\lambda \), as defined in Phase II. If \(\mathcal {B}_{i0} \prec {\textbf {0}}\) for at least one \(i\in \{1, \ldots , h\}\) in Algorithm 4, then the basis B is not feasible for system (42) for the current value of \(\lambda \). In this case Algorithm 7 is run and the Criss-Cross Method (Algorithm 3) is invoked to find a feasible complementary basis for the current value of \(\lambda \).

[Algorithm 7]

1.3 A.3. Solving the BOQP in Example (15) with the spLCP method

Below we present the iterations of the spLCP method when solving the BOQP in (15). We make use of the reduced format of the tableau given in Table 8. The initial Tableau \(T_{B_0}(\lambda )\) and all subsequent tableaux are presented without the columns 1 to h associated with the basic variables and the columns \(h+1\) to \(h+n\) associated with the artificial variables that always remain basic.

Initialization: The full rank factorization of matrix \(\varDelta M\) yields \(\varPsi =\begin{bmatrix} 2 &{} \quad 0 \\ 0 &{} \quad 3 \\ 0 &{} \quad 0 \end{bmatrix}\) and \({\textbf {t}} =\) \(\begin{bmatrix} 5 \\ -2 \end{bmatrix}\). Let \(B_0=\{1,2,3\}\); for the sake of clarity, we write \(B_0=\{w_1,w_2,w_3\}\).

Initial Tableau \(T_{B_0}(\lambda )\)

|  | \(\pmb {\mathrm {C}}_{.0}\) | \(\pmb {\mathrm {C}}_{.6}\) | \(\pmb {\mathrm {C}}_{.7}\) | \(\pmb {\mathrm {C}}_{.8}\) | \(\pmb {\mathrm {C}}_{.9}\) | \(\pmb {\mathrm {C}}_{.10}\) |
|---|---|---|---|---|---|---|
| \(w_1\) | \(-1+10\lambda \) | \(-2-4\lambda \) | 0 | -3 | 2 | 0 |
| \(w_2\) | \(1-6\lambda \) | 0 | \(-5-9\lambda \) | -5 | 0 | 3 |
| \(w_3\) | 15 | 3 | 5 | 0 | 0 | 0 |
| \(v_{t_1}\) | 5 | -2 | 0 | 0 | 0 | 0 |
| \(v_{t_2}\) | -2 | 0 | -3 | 0 | 0 | 0 |

Iteration 1: Set \(\lambda =0\) in the initial tableau and obtain \(T_{B_0}(0)\).

Tableau \(T_{B_0}(0)\)

|  | \(\pmb {\mathrm {C}}_{.0}\) | \(\pmb {\mathrm {C}}_{.6}\) | \(\pmb {\mathrm {C}}_{.7}\) | \(\pmb {\mathrm {C}}_{.8}\) | \(\pmb {\mathrm {C}}_{.9}\) | \(\pmb {\mathrm {C}}_{.10}\) |
|---|---|---|---|---|---|---|
| \(w_1\) | -1 | -2 | 0 | -3 | 2 | 0 |
| \(w_2\) | 1 | 0 | -5 | -5 | 0 | 3 |
| \(w_3\) | 15 | 3 | 5 | 0 | 0 | 0 |
| \(v_{t_1}\) | 5 | -2 | 0 | 0 | 0 | 0 |
| \(v_{t_2}\) | -2 | 0 | -3 | 0 | 0 | 0 |

Since \((T_{B_0}(0))_{1,0}=-1\), we have that \((T_{B_0}(0))_{.0} \not \ge {\textbf {0}}\). Thus we apply Phase I. Let \(r=\min \{i:(T_{B_0}(0))_{i,0}<0, i=1,2,3\}=1\). Then \((T_{B_0}(0))_{r,\bar{r}}=(T_{B_0}(0))_{1,6}=-2 <0\), and a diagonal pivot is performed.
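This diagonal pivot can be sketched in Python with exact rational arithmetic (a minimal illustration; the tableau is stored as a list of rows in the reduced format used here, and the function name is ours):

```python
from fractions import Fraction as F

def diagonal_pivot(T, r, c):
    """Principal pivot on entry (r, c) of a reduced tableau T.

    After the pivot, column c holds the column of the leaving basic
    variable of row r (formerly a unit column in the full tableau)."""
    p = T[r][c]
    new = [row[:] for row in T]
    new[r] = [x / p for x in T[r]]            # scale the pivot row
    new[r][c] = F(1) / p                      # leaving variable's column
    for i in range(len(T)):
        if i == r:
            continue
        f = T[i][c]
        new[i] = [T[i][j] - f * new[r][j] for j in range(len(T[0]))]
        new[i][c] = -f / p
    return new

# Tableau T_{B_0}(0); columns C_{.0}, C_{.6}, C_{.7}, C_{.8}, C_{.9}, C_{.10}
T0 = [[F(v) for v in row] for row in [
    [-1, -2,  0, -3, 2, 0],   # w_1
    [ 1,  0, -5, -5, 0, 3],   # w_2
    [15,  3,  5,  0, 0, 0],   # w_3
    [ 5, -2,  0,  0, 0, 0],   # v_{t_1}
    [-2,  0, -3,  0, 0, 0],   # v_{t_2}
]]
T1 = diagonal_pivot(T0, 0, 1)  # pivot on (T_{B_0}(0))_{1,6} = -2
# First row of T1 is the z_1 row of T_{B_1}(0): 1/2, -1/2, 0, 3/2, -1, 0
```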

Tableau \(T_{B_1}(0)\)

|  | \(\pmb {\mathrm {C}}_{.0}\) | \(\pmb {\mathrm {C}}_{.1}\) | \(\pmb {\mathrm {C}}_{.7}\) | \(\pmb {\mathrm {C}}_{.8}\) | \(\pmb {\mathrm {C}}_{.9}\) | \(\pmb {\mathrm {C}}_{.10}\) |
|---|---|---|---|---|---|---|
| \(z_1\) | 1/2 | -1/2 | 0 | 3/2 | -1 | 0 |
| \(w_2\) | 1 | 0 | -5 | -5 | 0 | 3 |
| \(w_3\) | 27/2 | 3/2 | 5 | -9/2 | 3 | 0 |
| \(v_{t_1}\) | 6 | -1 | 0 | 3 | -2 | 0 |
| \(v_{t_2}\) | -2 | 0 | -3 | 0 | 0 | 0 |

The obtained complementary basis \(B_1 = \{z_1, w_2, w_3\}\) is feasible for \(\lambda = 0\) since \((T_{B_1}(0))_{i,0} > 0\) for \(i=1,2,3\). Hence Phase I is terminated. A CBFS is obtained for spLCP (16) for \(\lambda = 0\):

$$\begin{aligned} \begin{bmatrix} w_1\\ w_2\\ w_3 \end{bmatrix}=\begin{bmatrix} 0\\ 1\\ 27/2 \end{bmatrix}\qquad \begin{bmatrix} z_1\\ z_2\\ z_3 \end{bmatrix}=\begin{bmatrix} 1/2\\ 0\\ 0 \end{bmatrix}, \end{aligned}$$
(44)

and yields an efficient solution to BOQP (15)

$$\begin{aligned} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}=\begin{bmatrix} 1/2 \\ 0 \end{bmatrix}. \end{aligned}$$

Phase II is initiated. Since \((T_{B_1}(0))_{i,0} > {0}\), the condition \(\mathcal {B}_{i0} \succ {\textbf {0}}\) holds for all \(i=1,2,3\). Algorithm 5 is used to find an invariancy interval for \(\lambda \), i.e., the increment \(\tau \) above 0, in which (44) remains a CBFS. To do so, the polynomial equations, \(P_{i1}(\tau )=0\) for each \(i=1,2,3\) and \(P_2(\tau )=0\), are solved for their positive roots:

$$\begin{aligned} \begin{aligned} P_{i1}(\tau )&=\det \big (-T_{B}(\lambda )_{KK}+\tau ^{-1}\mathscr {I}\big )\big (T_{B}(\lambda )_{i0} \\&-(-T_{B}(\lambda )_{iK})(-T_{B}(\lambda )_{KK}+\tau ^{-1}\mathscr {I})^{-1}T_{B}(\lambda )_{K0}\big )&=0. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned} \text {For } i=1,\,\, \det \left( \begin{bmatrix} 2+\tau ^{-1} &{} \quad 0\\ 0 &{} \quad \tau ^{-1} \end{bmatrix}\right) \left( 1/2-\begin{bmatrix} 1&\quad 0 \end{bmatrix} \begin{bmatrix} 2+\tau ^{-1} &{} \quad 0\\ 0 &{} \quad \tau ^{-1} \end{bmatrix}^{-1} \begin{bmatrix} 6 \\ -2 \end{bmatrix} \right)&=0\\ \left( \frac{2\tau +1}{\tau ^2}\right) \left( \frac{1-10\tau }{4\tau +2}\right)&=0\\ \alpha _1=\tau&= 1/10. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned} \text {For } i=2,\,\, \det \left( \begin{bmatrix} 2+\tau ^{-1} &{} \quad 0\\ 0 &{} \quad \tau ^{-1} \end{bmatrix}\right) \left( 1-\begin{bmatrix} 0&\quad -3 \end{bmatrix} \begin{bmatrix} 2+\tau ^{-1} &{} \quad 0\\ 0 &{} \quad \tau ^{-1} \end{bmatrix}^{-1} \begin{bmatrix} 6 \\ -2 \end{bmatrix} \right)&=0\\ \left( \frac{2\tau +1}{\tau ^2}\right) \left( 1-6\tau \right)&=0\\ \alpha _2=\tau&= 1/6. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned} \text {For } i=3,\,\, \det \left( \begin{bmatrix} 2+\tau ^{-1} &{} \quad 0\\ 0 &{} \quad \tau ^{-1} \end{bmatrix}\right) \left( 27/2-\begin{bmatrix} -3&\quad 0 \end{bmatrix} \begin{bmatrix} 2+\tau ^{-1} &{} \quad 0\\ 0 &{} \quad \tau ^{-1} \end{bmatrix}^{-1} \begin{bmatrix} 6 \\ -2 \end{bmatrix} \right)&=0\\ \left( \frac{2\tau +1}{\tau ^2}\right) \left( \frac{90\tau +27}{2(2\tau +1)}\right)&=0\\ \alpha _3=\tau&= \infty . \end{aligned} \end{aligned}$$

Then \(\rho _1=\min \{\alpha _1,\alpha _2,\alpha _3\}=\min \{1/10,1/6,\infty \}=1/10.\) Equation \(P_2(\tau )=0\) is solved:

$$\begin{aligned} \begin{aligned} P_2(\tau )=\det \big (-T_{B}(\lambda )_{KK}+\tau ^{-1}\mathscr {I}\big )&=0 \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned} \det \left( \begin{bmatrix} 2+\tau ^{-1} &{} \quad 0\\ 0 &{} \quad \tau ^{-1} \end{bmatrix}\right)&=0\\ \left( \frac{2\tau +1}{\tau ^2}\right)&=0 \end{aligned} \end{aligned}$$

Since the roots of equation \(P_2(\tau )=0\) are not positive, set \(\rho _2=\tau =\infty \). Find \(\rho =\min \{1-\lambda ,\rho _1,\rho _2\}=\min \{1,1/10,\infty \}=\rho _1=1/10\), and the invariancy interval for \(\lambda \) for the current CBFS (44) is [0, 1/10]. The starting tableau for the next iteration is constructed.
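The roots computed above can be cross-checked symbolically; the following sketch (using SymPy, with the blocks of Tableau \(T_{B_1}(0)\) entered by hand, and the function name ours) reproduces \(\alpha _1=1/10\) and \(\alpha _2=1/6\):

```python
import sympy as sp

tau = sp.symbols('tau', positive=True)

# Blocks of Tableau T_{B_1}(0); K indexes the artificial rows and columns
T_KK = sp.Matrix([[-2, 0], [0, 0]])   # (T_B(lambda))_{KK}
T_K0 = sp.Matrix([6, -2])             # (T_B(lambda))_{K0}
M = -T_KK + sp.eye(2) / tau

def smallest_positive_root(T_i0, T_iK):
    """Smallest positive root of P_{i1}(tau) = 0, or oo if there is none."""
    P = M.det() * (T_i0 - (-T_iK * M.inv() * T_K0)[0])
    roots = sp.solve(sp.Eq(P, 0), tau)
    return min(roots) if roots else sp.oo

alpha1 = smallest_positive_root(sp.Rational(1, 2), sp.Matrix([[-1, 0]]))  # row z_1
alpha2 = smallest_positive_root(sp.Integer(1), sp.Matrix([[0, 3]]))       # row w_2
rho1 = min(alpha1, alpha2)  # alpha_3 is infinite, so it does not affect the minimum
```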

Iteration 2: Set \(\lambda =1/10\) in the initial tableau and obtain \(T_{B_0}(1/10)\).

Tableau \(T_{B_0}(1/10)\)

|  | \(\pmb {\mathrm {C}}_{.0}\) | \(\pmb {\mathrm {C}}_{.6}\) | \(\pmb {\mathrm {C}}_{.7}\) | \(\pmb {\mathrm {C}}_{.8}\) | \(\pmb {\mathrm {C}}_{.9}\) | \(\pmb {\mathrm {C}}_{.10}\) |
|---|---|---|---|---|---|---|
| \(w_1\) | 0 | -12/5 | 0 | -3 | 2 | 0 |
| \(w_2\) | 2/5 | 0 | -59/10 | -5 | 0 | 3 |
| \(w_3\) | 15 | 3 | 5 | 0 | 0 | 0 |
| \(v_{t_1}\) | 5 | -2 | 0 | 0 | 0 | 0 |
| \(v_{t_2}\) | -2 | 0 | -3 | 0 | 0 | 0 |

The resulting \((T_{B_0}(1/10))_{i0} \ge 0\) for \(i=1,2,3\), i.e., the current complementary basis \(B_2 = B_0 = \{w_1, w_2, w_3\}\) is feasible for \(\lambda = 1/10\).

A new CBFS is obtained for spLCP (16) for \(\lambda = 1/10\):

$$\begin{aligned} \begin{bmatrix} w_1\\ w_2\\ w_3 \end{bmatrix}=\begin{bmatrix} 0\\ 2/5\\ 15 \end{bmatrix}\qquad \begin{bmatrix} z_1\\ z_2\\ z_3 \end{bmatrix}=\begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix}. \end{aligned}$$
(45)

and yields an efficient solution to BOQP (15)

$$\begin{aligned} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}=\begin{bmatrix} 0 \\ 0 \end{bmatrix}. \end{aligned}$$

Since \((T_{B_0}(1/10))_{i,0} \ge 0\), the condition \(\mathcal {B}_{i0}\succeq {\textbf {0}}\) holds for all \(i=1,2,3\). Algorithm 5 is used to find an invariancy interval for \(\lambda \), i.e., the increment \(\tau \) above 1/10 in which (45) remains a CBFS. To do so, the polynomial equations \(P_{i1}(\tau )=0\) for each \(i=1,2,3\) and \(P_2(\tau )=0\) are solved for their positive roots:

$$\begin{aligned} \begin{aligned} P_{i1}(\tau )&=\det \big (-T_{B}(\lambda )_{KK}+\tau ^{-1}\mathscr {I}\big )\big (T_{B}(\lambda )_{i0} \\&\quad -(-T_{B}(\lambda )_{iK})(-T_{B}(\lambda )_{KK}+\tau ^{-1}\mathscr {I})^{-1}T_{B}(\lambda )_{K0}\big )&=0. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned} \text {For } i=1, \qquad \det \left( \begin{bmatrix} \tau ^{-1} &{} \quad 0\\ 0 &{} \quad \tau ^{-1} \end{bmatrix}\right) \left( 0-\begin{bmatrix} -2&\quad 0 \end{bmatrix} \begin{bmatrix} \tau ^{-1} &{} 0\\ 0 &{} \tau ^{-1} \end{bmatrix}^{-1} \begin{bmatrix} 5 \\ -2 \end{bmatrix} \right)&=0\\ \left( \frac{1}{\tau ^2}\right) \left( 10\tau \right)&=0\\ \alpha _1=\tau&= \infty . \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned} \text {For } i=2, \qquad \det \left( \begin{bmatrix} \tau ^{-1} &{} \quad 0\\ 0 &{} \quad \tau ^{-1} \end{bmatrix}\right) \left( 2/5-\begin{bmatrix} 0&\quad -3 \end{bmatrix} \begin{bmatrix} \tau ^{-1} &{} \quad 0\\ 0 &{} \quad \tau ^{-1} \end{bmatrix}^{-1} \begin{bmatrix} 5 \\ -2 \end{bmatrix} \right)&=0\\ \left( \frac{1}{\tau ^2}\right) \left( 2/5-6\tau \right)&=0\\ \alpha _2=\tau&= 1/15. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned} \text {For } i=3, \qquad \det \left( \begin{bmatrix} \tau ^{-1} &{} \quad 0\\ 0 &{} \quad \tau ^{-1} \end{bmatrix}\right) \left( 15-\begin{bmatrix} 0&\quad 0 \end{bmatrix} \begin{bmatrix} \tau ^{-1} &{} \quad 0\\ 0 &{} \quad \tau ^{-1} \end{bmatrix}^{-1} \begin{bmatrix} 5 \\ -2 \end{bmatrix} \right)&=0\\ \left( \frac{15}{\tau ^2}\right)&=0\\ \alpha _3=\tau&= \infty . \end{aligned} \end{aligned}$$

Then \(\rho _1=\min \{\alpha _1, \alpha _2, \alpha _3\} = \min \{\infty , 1/15, \infty \}=1/15\). Equation \(P_2(\tau )=0\) is solved.

$$\begin{aligned} \begin{aligned} P_2(\tau )=\det \big (-T_{B}(\lambda )_{KK}+\tau ^{-1}\mathscr {I}\big )&=0 \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned} \det \left( \begin{bmatrix} \tau ^{-1} &{} \quad 0\\ 0 &{} \quad \tau ^{-1} \end{bmatrix}\right)&=0\\ \left( \frac{1}{\tau ^2}\right)&=0 \end{aligned} \end{aligned}$$

Set \(\rho _2=\tau =\infty \). Find \(\rho =\min \{1-\lambda ,\rho _1,\rho _2\}=\min \{9/10,1/15,\infty \}=\rho _1=1/15\), and the invariancy interval for \(\lambda \) for the current CBFS (45) is \([1/10,(1/10)+(1/15)]=[1/10,1/6]\). The starting tableau for the next iteration is constructed.

Iteration 3: Set \(\lambda =1/6\) in the initial tableau and obtain \(T_{B_0}(1/6)\).

Tableau \(T_{B_0}(1/6)\)

|  | \(\pmb {\mathrm {C}}_{.0}\) | \(\pmb {\mathrm {C}}_{.6}\) | \(\pmb {\mathrm {C}}_{.7}\) | \(\pmb {\mathrm {C}}_{.8}\) | \(\pmb {\mathrm {C}}_{.9}\) | \(\pmb {\mathrm {C}}_{.10}\) |
|---|---|---|---|---|---|---|
| \(w_1\) | 4/6 | -16/6 | 0 | -3 | 2 | 0 |
| \(w_2\) | 0 | 0 | -39/6 | -5 | 0 | 3 |
| \(w_3\) | 15 | 3 | 5 | 0 | 0 | 0 |
| \(v_{t_1}\) | 5 | -2 | 0 | 0 | 0 | 0 |
| \(v_{t_2}\) | -2 | 0 | -3 | 0 | 0 | 0 |

The resulting \((T_{B_0}(1/6))_{i0} \ge 0\) for \(i=1,2,3\), i.e., the current complementary basis \(B_3 = B_0 = \{w_1, w_2, w_3\}\) is feasible for \(\lambda = 1/6\). A CBFS is obtained for spLCP (16) for \(\lambda = 1/6\):

$$\begin{aligned} \begin{bmatrix} w_1\\ w_2\\ w_3 \end{bmatrix}=\begin{bmatrix} 4/6\\ 0\\ 15 \end{bmatrix}\qquad \begin{bmatrix} z_1\\ z_2\\ z_3 \end{bmatrix}=\begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix}, \end{aligned}$$
(46)

and yields an efficient solution to BOQP (15)

$$\begin{aligned} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}=\begin{bmatrix} 0 \\ 0 \end{bmatrix}. \end{aligned}$$

Since \((T_{B_0}(1/6))_{i,0} \ge 0\), the condition \(\mathcal {B}_{i0}\succeq {\textbf {0}}\) holds for all \(i=1,2,3\). Algorithm 5 is used to find an invariancy interval, i.e., the increment \(\tau \) above 1/6 in which (46) remains a CBFS. To do so, the polynomial equations \(P_{i1}(\tau )=0\) for each \(i=1,2,3\) and \(P_2(\tau )=0\) are solved for their positive roots.

$$\begin{aligned} \begin{aligned} P_{i1}(\tau )&=\det \big (-T_{B}(\lambda )_{KK}+\tau ^{-1}\mathscr {I}\big )\big (T_{B}(\lambda )_{i0} \\&\quad -(-T_{B}(\lambda )_{iK})(-T_{B}(\lambda )_{KK}+\tau ^{-1}\mathscr {I})^{-1}T_{B}(\lambda )_{K0}\big )&=0. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned} \text {For } i=1,\qquad \det \left( \begin{bmatrix} \tau ^{-1} &{} \quad 0\\ 0 &{} \quad \tau ^{-1} \end{bmatrix}\right) \left( 4/6-\begin{bmatrix} -2&\quad 0 \end{bmatrix} \begin{bmatrix} \tau ^{-1} &{} \quad 0\\ 0 &{} \quad \tau ^{-1} \end{bmatrix}^{-1} \begin{bmatrix} 5 \\ -2 \end{bmatrix} \right)&=0\\ \left( \frac{1}{\tau ^2}\right) \left( 4/6+10\tau \right)&=0\\ \alpha _1=\tau&= \infty . \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned} \text {For } i=2,\qquad \det \left( \begin{bmatrix} \tau ^{-1} &{} \quad 0\\ 0 &{} \quad \tau ^{-1} \end{bmatrix}\right) \left( 0-\begin{bmatrix} 0&\quad -3 \end{bmatrix} \begin{bmatrix} \tau ^{-1} &{} \quad 0\\ 0 &{} \quad \tau ^{-1} \end{bmatrix}^{-1} \begin{bmatrix} 5 \\ -2 \end{bmatrix} \right)&=0\\ \left( \frac{1}{\tau ^2}\right) \left( -6\tau \right)&=0\\ \alpha _2=\tau&=\infty . \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned} \text {For } i=3,\qquad \det \left( \begin{bmatrix} \tau ^{-1} &{} \quad 0\\ 0 &{} \quad \tau ^{-1} \end{bmatrix}\right) \left( 15-\begin{bmatrix} 0&\quad 0 \end{bmatrix} \begin{bmatrix} \tau ^{-1} &{} \quad 0\\ 0 &{} \quad \tau ^{-1} \end{bmatrix}^{-1} \begin{bmatrix} 5 \\ -2 \end{bmatrix} \right)&=0\\ \left( \frac{15}{\tau ^2}\right)&=0\\ \alpha _3=\tau&= \infty \end{aligned} \end{aligned}$$

Then \(\rho _1=\infty \). Equation \(P_2(\tau )=0\) is solved.

$$\begin{aligned} \begin{aligned} P_2(\tau )=\det \big (-T_{B}(\lambda )_{KK}+\tau ^{-1}\mathscr {I}\big )&=0 \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned} \det \left( \begin{bmatrix} \tau ^{-1} &{} \quad 0\\ 0 &{} \quad \tau ^{-1} \end{bmatrix}\right)&=0\\ \left( \frac{1}{\tau ^2}\right)&=0 \end{aligned} \end{aligned}$$

Set \(\rho _2=\tau =\infty \). Find \(\rho =\min \{1-\lambda ,\rho _1,\rho _2\}=\min \{5/6,\infty ,\infty \}=1-\lambda =5/6\), and the invariancy interval for \(\lambda \) for the current CBFS (46) is [1/6, 1].

The starting tableau for the next iteration is constructed.

Iteration 4: Set \(\lambda =1\) in the initial tableau and obtain \(T_{B_0}(1)\):

Tableau \(T_{B_0}(1)\)

|  | \(\pmb {\mathrm {C}}_{.0}\) | \(\pmb {\mathrm {C}}_{.6}\) | \(\pmb {\mathrm {C}}_{.7}\) | \(\pmb {\mathrm {C}}_{.8}\) | \(\pmb {\mathrm {C}}_{.9}\) | \(\pmb {\mathrm {C}}_{.10}\) |
|---|---|---|---|---|---|---|
| \(w_1\) | 9 | -6 | 0 | -3 | 2 | 0 |
| \(w_2\) | -5 | 0 | -14 | -5 | 0 | 3 |
| \(w_3\) | 15 | 3 | 5 | 0 | 0 | 0 |
| \(v_{t_1}\) | 5 | -2 | 0 | 0 | 0 | 0 |
| \(v_{t_2}\) | -2 | 0 | -3 | 0 | 0 | 0 |

The resulting \((T_{B_0}(1))_{2,0}=-5 < 0\) implies \(\mathcal {B}_{20}\prec {\textbf {0}}\). Therefore the current basis \(B_0 = \{w_1, w_2, w_3\}\) is updated using Algorithm 7. Let \(r=\min \{i:(T_{B_0}(1))_{i,0}<0, i=1,2,3\}=2\). Then \((T_{B_0}(1))_{r,\bar{r}}=(T_{B_0}(1))_{2,7}=-14 <0\), and a diagonal pivot is performed.

Tableau \(T_{B_4}(1)\)

|  | \(\pmb {\mathrm {C}}_{.0}\) | \(\pmb {\mathrm {C}}_{.6}\) | \(\pmb {\mathrm {C}}_{.2}\) | \(\pmb {\mathrm {C}}_{.8}\) | \(\pmb {\mathrm {C}}_{.9}\) | \(\pmb {\mathrm {C}}_{.10}\) |
|---|---|---|---|---|---|---|
| \(w_1\) | 9 | -6 | 0 | -3 | 2 | 0 |
| \(z_2\) | 5/14 | 0 | -1/14 | 5/14 | 0 | -3/14 |
| \(w_3\) | 170/14 | 3 | 5 | -25/14 | 0 | 15/14 |
| \(v_{t_1}\) | 5 | -2 | 0 | 0 | 0 | 0 |
| \(v_{t_2}\) | -11/14 | 0 | -3 | 15/14 | 0 | -9/14 |

The resulting \((T_{B_4}(1))_{i0} \ge {0}\) for \(i= 1, 2, 3\), i.e., the obtained complementary basis \(B_4 = \{w_1, z_2, w_3\}\) is feasible for \(\lambda = 1\). A new CBFS is obtained for spLCP (16) for \(\lambda = 1\):

$$\begin{aligned} \begin{bmatrix} w_1\\ w_2\\ w_3 \end{bmatrix}=\begin{bmatrix} 9\\ 0\\ 170/14 \end{bmatrix}\qquad \begin{bmatrix} z_1\\ z_2\\ z_3 \end{bmatrix}=\begin{bmatrix} 0\\ 5/14\\ 0 \end{bmatrix}, \end{aligned}$$
(47)

and yields an efficient solution to BOQP (15)

$$\begin{aligned} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}=\begin{bmatrix} 0 \\ 5/14 \end{bmatrix}. \end{aligned}$$

Since the entire parameter space has been examined, the spLCP method terminates.

1.4 A.4. Solution to the elastic net problem in Example (21)

Table 9 Invariancy regions (\(\mathcal{IR}\)) and efficient solution functions for TOQP (20) with data (21) solved as (22) with the mpLCP method
Table 10 Invariancy regions (\(\mathcal{IR}\)) and efficient solution functions for TOQP (20) with data (21)

1.5 A.5. Solution to the portfolio problem in Example (27)

Table 11 Invariancy regions (\(\mathcal{IR}\)) and efficient solution functions for Example (27) scalarized with the modified hybrid method \((\theta = \theta ^{nor}, \epsilon = \epsilon ^{nor}\), and the parameter space \(\varTheta ^{nor} \times \varLambda ' \times \mathcal {E}^{nor})\)


Cite this article

Jayasekara, P.L.W., Pangia, A.C. & Wiecek, M.M. On solving parametric multiobjective quadratic programs with parameters in general locations. Ann Oper Res 320, 123–172 (2023). https://doi.org/10.1007/s10479-022-04975-y
