Abstract
Inspired by the hybrid algorithm proposed by Dax (Numer. Algor. 50, 97–114, 2009), we establish three switching strategies for solving systems of linear inequalities in a least squares sense. By predicting the active set, the resulting algorithms avoid inefficient iterations. Three different switching formulas are designed in terms of the identification property, and a distinctive feature of our strategies is the adaptive estimation of the optimal active set. Extensive numerical experiments demonstrate the efficiency of the hybrid algorithms, especially for inconsistent systems of linear inequalities.
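To make the problem setting concrete, the following is a minimal sketch of a classical fixed-matrix (Han-type) iteration for \(\min_x \frac{1}{2}\Vert (Ax-b)_{+}\Vert^{2}\), the least squares formulation of the system \(Ax \leq b\) (cf. Han [1980] and Dax [2009]); it is illustrative only and is not the hybrid algorithm developed in the paper. The predicted active set is simply the index set of violated rows, which stabilizes near a solution by the identification property.

```python
import numpy as np

def solve_lsq_inequalities(A, b, x0=None, tol=1e-12, max_iter=1000):
    """Solve A x <= b in a least squares sense, i.e. minimize 0.5*||(A x - b)_+||^2,
    by the fixed-matrix (Han-type) iteration x <- x - A^+ (A x - b)_+ .
    The predicted active set is {i : (A x - b)_i > 0}."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    A_pinv = np.linalg.pinv(A)
    for _ in range(max_iter):
        r_plus = np.maximum(A @ x - b, 0.0)
        if np.linalg.norm(A.T @ r_plus) <= tol:   # stationary point of the objective
            break
        x = x - A_pinv @ r_plus
    active = np.flatnonzero(np.maximum(A @ x - b, 0.0) > 0)
    return x, active

# Tiny inconsistent example: x <= 1 and -x <= -2 cannot both hold.
A = np.array([[1.0], [-1.0]])
b = np.array([1.0, -2.0])
x, active = solve_lsq_inequalities(A, b)
print(x, active)   # x = 1.5 balances the two violations; both rows stay active
```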






References
Bennett, K. P., Mangasarian, O. L.: Robust linear programming discrimination of two linearly inseparable sets. Optim. Methods Softw. 1, 23–34 (1992)
Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, United Kingdom (2004)
Bramley, R., Winnicka, B.: Solving linear inequalities in a least squares sense. SIAM J. Sci. Comput. 17, 275–286 (1996)
Burke, J. V., Moré, J. J.: On the identification of active constraints. SIAM J. Numer. Anal. 25, 1197–1211 (1988)
Burke, J. V., Moré, J. J.: Exposing constraints. SIAM J. Optim. 4, 573–595 (1994)
Calamai, P. H., Moré, J. J.: Projected gradient methods for linearly constrained problems. Math. Program. 39, 93–116 (1987)
Censor, Y., Altschuler, M. D., Powlis, W. D.: A computational solution of the inverse problem in radiation-therapy treatment planning. Appl. Math. Comput. 25, 57–88 (1988)
Censor, Y., Ben-Israel, A., Xiao, Y., Galvin, J. M.: On linear infeasibility arising in intensity-modulated radiation therapy inverse planning. Linear Algebra Appl. 428, 1406–1420 (2008)
Cristianini, N., Shawe-Taylor, J.: An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press, United Kingdom (2000)
Dax, A.: On computational aspects of bounded linear least squares problems. ACM Trans. Math. Softw. 17, 64–73 (1991)
Dax, A.: A hybrid algorithm for solving linear inequalities in a least squares sense. Numer. Algor. 50, 97–114 (2009)
Detrano, R., Janosi, A., Steinbrunn, W., Pfisterer, M., Schmid, J. J., Sandhu, S., Guppy, K. H., Lee, S., Froelicher, V.: International application of a new probability algorithm for the diagnosis of coronary artery disease. Amer. J. Cardiol. 64, 304–310 (1989)
Goberna, M. A., Hiriart-Urruty, J. B., López, M. A.: Best approximate solutions of inconsistent linear inequality systems. Vietnam J. Math. 46, 271–284 (2018)
Han, S. P.: Least-Squares Solution of Linear Inequalities. Technical Report 2141, Math. Res. Center, University of Wisconsin-Madison (1980)
Ketabchi, S., Salahi, M.: Correcting inconsistency in linear inequalities by minimal change in the right hand side vector. Sci. J. Moldova 17, 179–192 (2009)
Lei, Y.: The inexact fixed matrix iteration for solving large linear inequalities in a least squares sense. Numer. Algor. 69, 227–251 (2015)
Li, W., Swetits, J.: A new algorithm for solving strictly convex quadratic programs. SIAM J. Optim. 7, 595–619 (1997)
Madsen, K., Nielsen, H. B., Pinar, M. C.: A finite continuation algorithm for bound constrained quadratic programming. SIAM J. Optim. 9, 62–83 (1999)
Madsen, K., Nielsen, H. B., Pinar, M. C.: Bound constrained quadratic programming via piecewise quadratic functions. Math. Program. 85, 135–156 (1999)
Moré, J. J., Sorensen, D. C.: Computing a trust region step. SIAM J. Sci. Stat. Comput. 4, 553–572 (1983)
Nocedal, J., Wright, S.: Numerical Optimization. Springer, New York (2006)
Pinar, M. C.: Newton’s method for linear inequality systems. Eur. J. Oper. Res. 107, 710–719 (1998)
Popa, C., Şerban, C.: Han-type algorithms for inconsistent systems of linear inequalities – a unified approach. Appl. Math. Comput. 246, 247–256 (2014)
Robinson, D. P., Feng, L., Nocedal, J., Pang, J. S.: Subspace accelerated matrix splitting algorithms for asymmetric and symmetric linear complementarity problems. SIAM J. Optim. 23, 1371–1397 (2013)
Smith, F. M.: Pattern classifier design by linear programming. IEEE Trans. Comput. 17, 367–372 (1968)
Vapnik, V., Kotz, S.: Estimation of Dependences Based on Empirical Data. Springer, New York (1982)
Acknowledgements
The authors would like to thank the two anonymous referees for their detailed comments and valuable suggestions, which have improved the presentation of the paper.
Funding
This work was supported by the National Natural Science Foundation of China (No. 11871205).
Ethics declarations
Conflict of interest
The authors declare no competing interests.
Appendix: Proofs of Corollaries 1–3
1.1 Proof of Corollary 1
Proof
Let \(d_k\) be the solution of (2.2). Since \(A\) has full column rank by assumption, we have
For the next iteration \(x_{k+1} = x_k + d_k\), the corresponding residual vector is
Theorem 2 implies that there exists a positive integer \(k_0\) such that, for all \(k \geq k_0\),
By a simple recurrence, the result (36) holds for any positive integer \(n_s\). □
1.2 Proof of Corollary 2
Proof
For the actual reduction \(\delta L_k\) defined in (38), it follows from the last equality in (19) that
Since \(\Vert b_{\mathcal {I}_{k-1}}-A_{\mathcal {I}_{k-1}}x_{k}\Vert ^{2} \geq \Vert (b_{\mathcal {I}_{k-1}}-A_{\mathcal {I}_{k-1}}x_{k})_{+}\Vert ^{2}\), the inequality (40) follows from (39) and (50).
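This inequality holds componentwise, since \(r_{i}^{2} \geq (\max \{r_{i},0\})^{2}\) for every entry of the residual; a quick illustrative check (the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.standard_normal(10)   # stands in for b_I - A_I x_k
# ||r||^2 >= ||r_+||^2, where r_+ is the componentwise positive part
assert np.linalg.norm(r)**2 >= np.linalg.norm(np.maximum(r, 0.0))**2
```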
For sufficiently large \(k\), the identification property of the fixed matrix algorithm implies that there is a positive integer \(k_0\) such that
which implies that
and
hold for all positive integers \(k \geq k_0\). The equality (41) then follows from the definitions of \(\delta E_k\) and \(\delta L_k\). □
1.3 Proof of Corollary 3
Proof
Since \(x_k\) solves the least squares problem (18), we have the normal equation
Since \(A\) has full column rank, the Moore-Penrose pseudoinverse of \(A\) can be expressed as \(A^{+} = (A^{T}A)^{-1}A^{T}\), and it follows from (51) that
Let \(\bar {N}_{k-1}=I-N_{k-1}\), where \(N_{k-1}\) is the sign matrix defined in Corollary 1. Consequently, \(\bar {N}_{k-1}\) is the sign matrix of the vector \(Ax_{k-1}-b\) and
Theorem 2 shows that \(\bar {N}_{k-1}=\bar {N}_{k-2}\) holds for all sufficiently large \(k\). Moreover, combining (52) with (53), we get
which yields
where \(\Vert \cdot \Vert\) denotes the spectral norm of the matrix \(A^{+}\bar {N}_{k-1}A\). The result in (42) is a direct consequence of the observation that the maximal eigenvalue of the positive semidefinite matrix \(A^{+}\bar {N}_{k-1}A\) is exactly 1.
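Both facts used in this step can be checked numerically on synthetic data. The sketch below assumes, as the sign matrix construction suggests, that \(\bar {N}_{k-1}\) is a diagonal matrix with entries in \(\{0,1\}\); it verifies that \(A^{+} = (A^{T}A)^{-1}A^{T}\) for a full column rank \(A\) and that the spectrum of \(A^{+}\bar {N}A\) lies in \([0,1]\):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 30, 8
A = rng.standard_normal((m, n))                    # full column rank almost surely
A_pinv = np.linalg.pinv(A)
assert np.allclose(A_pinv, np.linalg.solve(A.T @ A, A.T))   # A^+ = (A^T A)^{-1} A^T

N_bar = np.diag(rng.integers(0, 2, size=m).astype(float))   # diagonal 0/1 sign matrix
B = A_pinv @ N_bar @ A                             # similar to a symmetric PSD matrix
eigs = np.linalg.eigvals(B).real
assert -1e-10 <= eigs.min() and eigs.max() <= 1 + 1e-10     # spectrum within [0, 1]
```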
Assume that the sequence \(\{x_k\}\) satisfies (54) for all \(k\) beyond a sufficiently large \(k_0\). For simplicity, denote \(B=A^{+}\bar {N}_{k-1}A\); then we have
Since \(B\) is a symmetric matrix and hence diagonalizable, there exists an orthogonal matrix \(S\) such that \(B = S^{T}{\Lambda} S\), where \({\Lambda}\) is a diagonal matrix consisting of the eigenvalues \(\lambda_{1},\ldots,\lambda_{n}\). Denoting \(S{\Delta}_{0} = (\nu_{1},\cdots ,\nu_{n})^{T}\), we have
Hence, we obtain the contraction ratio
By the Cauchy-Schwarz inequality
we immediately get \(c_{k+1} \geq c_{k}\). Hence, the contraction ratio \(c_k\) keeps increasing until it approaches 1. □
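The monotonicity of the contraction ratio can be observed numerically. In the illustrative sketch below (our notation), a random symmetric positive semidefinite \(B\) with spectrum in \([0,1]\) stands in for \(A^{+}\bar {N}_{k-1}A\), and the ratios \(c_{k} = \Vert B^{k+1}{\Delta}_{0}\Vert / \Vert B^{k}{\Delta}_{0}\Vert\) are checked to be nondecreasing, as the Cauchy-Schwarz argument predicts:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal matrix
lam = rng.uniform(0.0, 1.0, size=n)                # eigenvalues in [0, 1]
B = Q @ np.diag(lam) @ Q.T                         # symmetric PSD with ||B|| <= 1

delta = rng.standard_normal(n)                     # Delta_0
ratios = []
for _ in range(25):
    nxt = B @ delta
    ratios.append(np.linalg.norm(nxt) / np.linalg.norm(delta))
    delta = nxt
# c_{k+1} >= c_k: the ratios increase toward the dominant eigenvalue
assert all(c2 >= c1 - 1e-12 for c1, c2 in zip(ratios, ratios[1:]))
```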
Cite this article
Li, B., Lei, Y. Hybrid algorithms with active set prediction for solving linear inequalities in a least squares sense. Numer Algor 90, 1327–1356 (2022). https://doi.org/10.1007/s11075-021-01232-4