Abstract
The conjugate gradient (CG) algorithm is the most frequently used iterative method for solving linear systems Ax = b with a symmetric positive definite (SPD) matrix. In this paper we construct real symmetric positive definite matrices A of order n and real right-hand sides b for which the CG algorithm has a prescribed residual norm convergence curve. We also consider prescribing the A-norms of the error as well. We completely characterize the tridiagonal matrices constructed by the Lanczos algorithm, and their inverses, in terms of the CG residual norms and A-norms of the error. This also gives expressions and lower bounds for the ℓ2 norm of the error. Finally, we study the problem of prescribing both the CG residual norms and the eigenvalues of A, and we show that this is not always possible. Our constructions are illustrated by numerical examples.
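
For orientation, the following is a minimal sketch of the standard Hestenes-Stiefel CG iteration (with zero initial guess), recording the two quantities whose convergence curves the abstract refers to: the residual 2-norms and, when the exact solution is available, the A-norms of the error. This is the textbook algorithm, not the paper's construction of A and b with a prescribed convergence curve; the function name and arguments are illustrative.

import numpy as np

def cg(A, b, x_exact=None, tol=1e-12, maxit=None):
    # Standard conjugate gradient for SPD A, starting from x0 = 0.
    # Returns the approximate solution, the residual 2-norms, and
    # (if x_exact is given) the A-norms of the error at each iteration.
    n = b.size
    maxit = n if maxit is None else maxit
    x = np.zeros(n)
    r = b - A @ x                      # initial residual
    p = r.copy()                       # initial search direction
    res_norms = [np.linalg.norm(r)]
    err_Anorms = []
    if x_exact is not None:
        e = x_exact - x
        err_Anorms.append(np.sqrt(e @ (A @ e)))
    for _ in range(maxit):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)     # step length
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p           # new search direction
        r = r_new
        res_norms.append(np.linalg.norm(r))
        if x_exact is not None:
            e = x_exact - x
            err_Anorms.append(np.sqrt(e @ (A @ e)))
        if res_norms[-1] <= tol * res_norms[0]:
            break
    return x, res_norms, err_Anorms

For an SPD matrix A and right-hand side b, calling cg(A, b, x_exact=np.linalg.solve(A, b)) returns both convergence curves, which can then be compared with prescribed ones.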
Acknowledgments
Many thanks to Erin Carson and Jurjen Duintjer Tebbens for their interesting comments and suggestions, and to Petr Tichý for his comments and for suggesting the use of relation (2). The author thanks the referees for their detailed comments.
Cite this article
Meurant, G. On prescribing the convergence behavior of the conjugate gradient algorithm. Numer Algor 84, 1353–1380 (2020). https://doi.org/10.1007/s11075-019-00851-2