Parsimonious Least Norm Approximation

Abstract

A theoretically justifiable, fast, finite successive linear approximation algorithm is proposed for obtaining a parsimonious solution to a corrupted linear system Ax = b + p, where the corruption p is due to noise or measurement error. The proposed linear-programming-based algorithm finds a solution x by parametrically minimizing the number of nonzero elements in x and the error ‖Ax − b − p‖₁. Numerical tests on a signal-processing-based example indicate that the proposed method is comparable to a method that parametrically minimizes the 1-norm of the solution x and the error ‖Ax − b − p‖₁, and that both methods are superior, by orders of magnitude, to solutions obtained by least squares as well as by combinatorially choosing an optimal solution with a specific number of nonzero elements.
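To make the idea concrete, below is a minimal sketch, not the authors' implementation, of a successive-linear-approximation loop of the kind the abstract describes. It assumes a concave exponential surrogate sum_i (1 − exp(−alpha·|x_i|)) for the count of nonzero elements, a fixed trade-off weight lam in place of the paper's parametric sweep, and SciPy's linear-programming solver; the function name plna_sla and all parameter values are illustrative.

    import numpy as np
    from scipy.optimize import linprog

    def plna_sla(A, b, lam=0.3, alpha=5.0, max_iter=30, tol=1e-6):
        # Sketch: minimize (1 - lam) * ||A x - b||_1
        #                + lam * sum_i (1 - exp(-alpha * t_i)),  with |x| <= t.
        # The concave penalty approximates the number of nonzeros in x;
        # each pass linearizes it at the current t and solves one LP.
        m, n = A.shape
        t = np.ones(n)
        x = np.zeros(n)
        for _ in range(max_iter):
            grad = lam * alpha * np.exp(-alpha * t)  # slope of the concave term at t
            # LP variables z = [x (n), t (n), y (m)], with y bounding |A x - b|.
            c = np.concatenate([np.zeros(n), grad, (1.0 - lam) * np.ones(m)])
            A_ub = np.block([
                [ A,          np.zeros((m, n)), -np.eye(m)],         #  A x - y <= b
                [-A,          np.zeros((m, n)), -np.eye(m)],         # -A x - y <= -b
                [ np.eye(n), -np.eye(n),         np.zeros((n, m))],  #  x - t <= 0
                [-np.eye(n), -np.eye(n),         np.zeros((n, m))],  # -x - t <= 0
            ])
            b_ub = np.concatenate([b, -b, np.zeros(2 * n)])
            bounds = [(None, None)] * n + [(0, None)] * (n + m)  # x free; t, y >= 0
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
            if not res.success:
                break
            x_new, t_new = res.x[:n], res.x[n:2 * n]
            converged = np.max(np.abs(t_new - t)) < tol
            x, t = x_new, t_new
            if converged:
                break
        return x

    # Toy usage: recover a 2-sparse x from noisy measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 10))
    x_true = np.zeros(10)
    x_true[[2, 7]] = [1.5, -2.0]
    b = A @ x_true + 0.01 * rng.standard_normal(20)
    print(np.round(plna_sla(A, b), 3))

Entries of the returned x below a small threshold can be treated as exact zeros; sweeping lam from 0 to 1 would trace the parametric trade-off between residual error and parsimony that the abstract describes.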


Cite this article

Bradley, P., Mangasarian, O. & Rosen, J. Parsimonious Least Norm Approximation. Computational Optimization and Applications 11, 5–21 (1998). https://doi.org/10.1023/A:1018361916442
