Abstract
In least squares problems, it is often desired to solve the same problem repeatedly but with several rows of the data either added, deleted, or both. Methods for quickly solving a problem after adding or deleting one row of data at a time are known. In this paper we introduce fundamental rank-k updating and downdating methods and show how extensions of rank-1 downdating methods based on LINPACK, Corrected Semi-Normal Equations (CSNE), and Gram-Schmidt factorizations, as well as new rank-k downdating methods, can all be derived from these fundamental results. We then analyze the cost of each new algorithm and make comparisons to k applications of the corresponding rank-1 algorithms. We provide experimental results comparing the numerical accuracy of the various algorithms, paying particular attention to the downdating methods, due to their potential numerical difficulties for ill-conditioned problems. We then discuss the computation involved for each downdating method, measured in terms of operation counts and BLAS calls. Finally, we provide serial execution timing results for these algorithms, noting where improvement and optimization would be most beneficial. From our experiments we conclude that the Gram-Schmidt methods perform best in terms of numerical accuracy, but may be too costly in serial execution for large problems.
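For context, the sketch below (not drawn from the paper itself) illustrates the classical rank-1 R-factor modifications that the rank-k methods generalize: a Givens-rotation update that accounts for an added row z, and a LINPACK-style downdate that removes one. The NumPy implementation and the names cholupdate/choldowndate are illustrative assumptions, not the authors' code; a rank-k modification is what one would compare against k successive applications of such routines, as the paper does in its cost analysis.

```python
import numpy as np

def cholupdate(R, z):
    """Rank-1 update: return upper-triangular R1 with R1^T R1 = R^T R + z z^T.

    R is the (nonsingular) triangular factor of A^T A; appending the row z^T
    to A corresponds to this update. Givens rotations restore [R; z^T] to
    triangular form.
    """
    R, z = R.astype(float).copy(), np.asarray(z, dtype=float).copy()
    n = R.shape[0]
    for k in range(n):
        r = np.hypot(R[k, k], z[k])            # new diagonal entry
        c, s = R[k, k] / r, z[k] / r           # rotation zeroing z[k]
        R[k, k] = r
        if k + 1 < n:
            t = R[k, k + 1:].copy()
            R[k, k + 1:] = c * t + s * z[k + 1:]
            z[k + 1:] = c * z[k + 1:] - s * t
    return R

def choldowndate(R, z):
    """LINPACK-style rank-1 downdate: return R1 with R1^T R1 = R^T R - z z^T.

    Fails if removing the row z^T would destroy positive definiteness.
    """
    R, z = R.astype(float).copy(), np.asarray(z, dtype=float)
    n = R.shape[0]
    a = np.linalg.solve(R.T, z)                # solve R^T a = z
    gamma2 = 1.0 - a @ a
    if gamma2 <= 0.0:
        raise ValueError("downdate would destroy positive definiteness")
    q = np.sqrt(gamma2)
    w = np.zeros(n)                            # rotated into z^T as R is downdated
    for k in range(n - 1, -1, -1):
        r = np.hypot(q, a[k])
        c, s = q / r, a[k] / r                 # rotation zeroing a[k] against q
        q = r
        row = R[k, :].copy()
        R[k, :] = c * row - s * w
        w = s * row + c * w
    return R

# A rank-k downdate can be compared against k successive rank-1 downdates:
# for row in Z:          # Z holds the k rows to be removed
#     R = choldowndate(R, row)
```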
Additional information
Communicated by Å. Björck
Research supported in part by the Joint Services Electronics Program, contract no. F49620-90-C-0039.
Cite this article
Olszanskyj, S.J., Lebak, J.M. & Bojanczyk, A.W. Rank-k modification methods for recursive least squares problems. Numer Algor 7, 325–354 (1994). https://doi.org/10.1007/BF02140689