Abstract
Linear inverse problems arise in many practical applications. In the present work, we propose a residual-based surrogate hyperplane Bregman-Kaczmarz method (RSHBK) for solving such problems. The convergence theory of the proposed method is investigated in detail. When the data are contaminated by independent noise, meaning that the observed measurement at each iteration of the algorithm is refreshed with noise that is new and independent of the noise in previous iterations, we develop an adaptive version of the RSHBK method and derive an adaptive relaxation parameter that optimizes the bound on the expected error. We demonstrate the efficiency of the proposed methods on both noise-free and independent-noise problems, comparing them with other state-of-the-art Kaczmarz methods in terms of computation time and convergence rate on synthetic experiments and real-world applications.
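Although the RSHBK update itself is specified in the paper body, the following minimal Python sketch of the plain randomized sparse Kaczmarz iteration of Schöpfer and Lorenz [14], from which Bregman-Kaczmarz-type methods such as RSHBK derive, may help fix ideas. It is an illustration of the general framework only, not the proposed RSHBK method; the problem sizes, sparsity level, and parameter \(\lambda \) below are illustrative placeholders, not the experimental setup of the paper.

```python
import numpy as np

def soft_shrinkage(x, lam):
    # Soft-thresholding operator S_lam(x) = max(|x| - lam, 0) * sign(x)
    return np.maximum(np.abs(x) - lam, 0.0) * np.sign(x)

def randomized_sparse_kaczmarz(A, b, lam=1.0, iters=20000, rng=None):
    # Plain randomized sparse Kaczmarz: one dual step along a randomly
    # chosen row per iteration, primal iterate recovered by soft shrinkage.
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    row_norms = np.sum(A ** 2, axis=1)
    probs = row_norms / row_norms.sum()  # sample row i with prob ~ ||a_i||^2
    x_dual = np.zeros(n)                 # dual variable x*_k
    x = soft_shrinkage(x_dual, lam)      # primal iterate x_k
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        a_i = A[i]
        x_dual -= (a_i @ x - b[i]) / row_norms[i] * a_i
        x = soft_shrinkage(x_dual, lam)
    return x

# Synthetic consistent system with a sparse ground truth
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 500))
x_true = np.zeros(500)
x_true[rng.choice(500, size=10, replace=False)] = rng.standard_normal(10)
b = A @ x_true
x_hat = randomized_sparse_kaczmarz(A, b, lam=1.0, iters=20000, rng=rng)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In the noisy regime studied in the paper, the step would additionally involve the adaptive relaxation parameter mentioned in the abstract; that choice is specific to RSHBK and is not reproduced here.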
Data Availability
The data that support the findings of this study are available upon reasonable request from the authors.
References
Chen, S.S.-B., Donoho, D.L., Saunders, M.A.: Atomic decomposition by basis pursuit. SIAM Rev. 43(1), 129–159 (2001)
Donoho, D.L.: Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006)
Cai, J.-F., Osher, S., Shen, Z.-W.: Convergence of the linearized Bregman iteration for \(\ell _1\)-norm minimization. Math. Comput. 78(268), 2127–2136 (2009)
Cai, J.-F., Osher, S., Shen, Z.-W.: Linearized Bregman iterations for compressed sensing. Math. Comput. 78(267), 1515–1536 (2009)
Yin, W.-T., Osher, S., Goldfarb, D., Darbon, J.: Bregman iterative algorithms for \(\ell _1\)-minimization with applications to compressed sensing. SIAM J. Imag. Sci. 1(1), 143–168 (2008)
Elad, M.: Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing. Springer Science & Business Media, Berlin (2010)
Liang, D., Cheng, J., Ke, Z.-W., Ying, L.: Deep magnetic resonance image reconstruction: inverse problems meet neural networks. IEEE Signal Process. Mag. 37(1), 141–151 (2020)
Adler, J., Öktem, O.: Solving ill-posed inverse problems using iterative deep neural networks. Inverse Prob. 33(12), 124007 (2017)
Arridge, S., Maass, P., Öktem, O., Schönlieb, C.B.: Solving inverse problems using data-driven models. Acta Numer. 28, 1–174 (2019)
Benning, M., Burger, M.: Modern regularization methods for inverse problems. Acta Numer. 27, 1–111 (2018)
Kaczmarz, S.: Angenäherte auflösung von systemen linearer gleichungen. Bull. Int. Acad. Pol. Sci. Lett. 35, 335–357 (1937)
Strohmer, T., Vershynin, R.: A randomized Kaczmarz algorithm with exponential convergence. J. Fourier Anal. Appl. 15(2), 262–278 (2009)
Tondji, L., Lorenz, D.A., Necoara, I.: An accelerated randomized Bregman-Kaczmarz method for strongly convex linearly constraint optimization. In: Proceedings of the 2023 European Control Conference (ECC), pp. 1–6. IEEE (2023)
Schöpfer, F., Lorenz, D.A.: Linear convergence of the randomized sparse Kaczmarz method. Math. Program. 173, 509–536 (2019)
Lorenz, D.A., Wenger, S., Schöpfer, F., Magnor, M.: A sparse Kaczmarz solver and a linearized Bregman method for online compressed sensing. In: Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), pp. 1347–1351 (2014)
Tondji, L., Tondji, I., Lorenz, D.A.: Adaptive Bregman-Kaczmarz: an approach to solve linear inverse problems with independent noise exactly. Inverse Prob. 40(9), 095006 (2024)
Tondji, L., Necoara, I., Lorenz, D.A.: Acceleration and restart for the randomized Bregman-Kaczmarz method. Linear Algebra Appl. 699, 508–538 (2024)
Lorenz, D.A., Schöpfer, F., Wenger, S.: The linearized Bregman method via split feasibility problems: analysis and generalizations. SIAM J. Imag. Sci. 7(2), 1237–1262 (2014)
Petra, S.: Randomized sparse block Kaczmarz as randomized dual block-coordinate descent. Analele Ştiinţifice ale Univ. Ovidius Constanţa Seria Matematică 23(3), 129–149 (2015)
Zhang, L., Yuan, Z.-Y., Wang, H.-X., Zhang, H.: A weighted randomized sparse Kaczmarz method for solving linear systems. Comput. Appl. Math. 41(8), 383 (2022)
Yuan, Z.-Y., Zhang, L., Wang, H.-X., Zhang, H.: Adaptively sketched Bregman projection methods for linear systems. Inverse Prob. 38(6), 065005 (2022)
Tondji, L., Lorenz, D.A.: Faster randomized block sparse Kaczmarz by averaging. Numer. Algorithms 93(4), 1417–1451 (2023)
Lorenz, D.A., Winkler, M.: Minimal error momentum Bregman-Kaczmarz. arXiv preprint arXiv:2307.15435 (2023)
Yun, Z., Han, D., Su, Y.-S., Xie, J.-X.: Fast stochastic dual coordinate descent algorithms for linearly constrained convex optimization. arXiv preprint arXiv:2307.16702 (2023)
Marshall, N.F., Mickelin, O.: An optimal scheduled learning rate for a randomized Kaczmarz algorithm. SIAM J. Matrix Anal. Appl. 44, 312–330 (2023)
Rockafellar, R.T., Wets, R.J.B.: Variational Analysis. Springer, Berlin (1998)
Bregman, L.M.: The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 7(3), 200–217 (1967)
Schöpfer, F., Lorenz, D.A., Tondji, L., Winkler, M.: Extended randomized Kaczmarz method for sparse least squares and impulsive noise problems. Linear Algebra Appl. 652, 132–154 (2022)
Greub, W., Rheinboldt, W.: On a generalization of an inequality of L.V. Kantorovich. Proc. Am. Math. Soc. 10, 407–415 (1959)
Wang, Z., Yin, J.-F., Zhao, J.-C.: The sparse Kaczmarz method with surrogate hyperplane for the regularized basis pursuit problem. J. Comput. Appl. Math. 454, 116182 (2025)
Funding
This work was supported by the National Natural Science Foundation of China (Grant No. 11971354) and the Science and Technology Innovation Commission of Shenzhen (Grant No. 20220809161224001).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix A: Proof for Theorem 3.4
We first state a lemma giving the Kantorovich inequality, which is used in the proof of Theorem 3.4.
Lemma A.1
[29] Given a linear and self-adjoint operator B on a Hilbert space \(\mathcal {H}\). If the real numbers m, M with \(0<m\le M\) and the operator B fulfill the condition \(mE\le B\le ME\),
where E is the identity operator in \(\mathcal {H}\), then for all \(x\in \mathcal {H}\), \(\langle Bx,x\rangle \,\langle B^{-1}x,x\rangle \le \frac{(m+M)^2}{4mM}\Vert x\Vert ^4\).
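To see that the constant in this bound is sharp, consider, for example, \(B=\operatorname {diag}(m,M)\) on \(\mathbb {R}^2\) and \(x=(1,1)^\top \): then \(\langle Bx,x\rangle =m+M\) and \(\langle B^{-1}x,x\rangle =\frac{m+M}{mM}\), so \(\langle Bx,x\rangle \langle B^{-1}x,x\rangle =\frac{(m+M)^2}{mM}=\frac{(m+M)^2}{4mM}\Vert x\Vert ^4\), since \(\Vert x\Vert ^4=4\).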
We are now ready to prove Theorem 3.4.
Proof
Denote \(e_k=x^*_k-\hat{x}\), where \(\hat{x}\) is the solution of \(Ax=b\). Then
so it holds that
Furthermore, by the definition of \(e_k\), it follows that
Let \(P_k=\frac{A^\top Ae_k{e_k}^\top A^\top A}{{e_k}^\top A^\top AA^\top Ae_k}\). Since \(P_k^2=P_k\) and \(P_k^\top =P_k\), the matrix \(P_k\) is an orthogonal projection, and it is obtained that
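Indeed, since \(P_k\) is an orthogonal projection, the Pythagorean identity gives \(\Vert (I-P_k)e_k\Vert _2^2=\Vert e_k\Vert _2^2-\Vert P_ke_k\Vert _2^2\), and a direct computation from the definition of \(P_k\) gives \(\Vert P_ke_k\Vert _2^2=\frac{({e_k}^\top A^\top Ae_k)^2}{{e_k}^\top A^\top AA^\top Ae_k}=\frac{\Vert Ae_k\Vert _2^4}{\Vert A^\top Ae_k\Vert _2^2}\), using \({e_k}^\top A^\top AA^\top Ae_k=\Vert A^\top Ae_k\Vert _2^2\).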
Denote
We now consider a sharp lower bound for q. Since \(AA^\top \) is a symmetric positive semidefinite matrix, we can apply the Kantorovich inequality of Lemma A.1 with \(x=Ae_k\) and \(B=AA^\top \).
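Specifically, taking m and M to be the smallest and largest eigenvalues of \(AA^\top \) (assuming \(AA^\top \) is positive definite, these are \(\sigma _{\min }^2(A)\) and \(\sigma _{\max }^2(A)\), the extreme squared singular values of A), Lemma A.1 yields \(\langle AA^\top Ae_k,Ae_k\rangle \,\langle (AA^\top )^{-1}Ae_k,Ae_k\rangle \le \frac{(\sigma _{\min }^2(A)+\sigma _{\max }^2(A))^2}{4\sigma _{\min }^2(A)\sigma _{\max }^2(A)}\Vert Ae_k\Vert _2^4\).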
Therefore,
Combining the above equation and inequality (27), it follows that
Hence, it holds that
where the first inequality follows from inequality (5). \(\square \)
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Dong, Z., Wang, Z., Yin, G. et al. A Surrogate Hyperplane Bregman–Kaczmarz Method for Solving Linear Inverse Problems. J Sci Comput 102, 7 (2025). https://doi.org/10.1007/s10915-024-02737-5