A Surrogate Hyperplane Bregman–Kaczmarz Method for Solving Linear Inverse Problems


Abstract

Linear inverse problems arise in many practical applications. In the present work, we propose a residual-based surrogate hyperplane Bregman–Kaczmarz method (RSHBK) for solving this class of problems, and we investigate its convergence theory in detail. When the data are contaminated by independent noise, meaning that the observed measurement at each iteration is refreshed with noise that is new and independent of the noise in previous iterations, we develop an adaptive version of the RSHBK method and derive an adaptive relaxation parameter that optimizes the bound on the expected error. We demonstrate the efficiency of the proposed methods in both the noise-free and the independent-noise settings by comparing them with other state-of-the-art Kaczmarz methods in terms of computation time and convergence rate, on synthetic experiments and real-world applications.
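
For concreteness, the following is a minimal sketch of the kind of iteration described above, assuming the common choice \(f(x)=\lambda \Vert x\Vert _1+\frac{1}{2}\Vert x\Vert _2^2\) (for which the primal iterate \(x_k=\nabla f^*(x_k^*)\) is obtained by soft thresholding), with the dual update direction \(A^\top r_k\) and step \(\eta _k\Vert r_k\Vert _2^2/\Vert A^\top r_k\Vert _2^2\) taken from the relation used in Appendix A. The names rshbk_sketch, lam and eta are ours, for illustration only, and this is not the paper's exact algorithm.

import numpy as np

def soft_threshold(z, lam):
    # x = argmin_x lam*||x||_1 + 0.5*||x - z||_2^2, i.e. grad f* for the assumed f
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def rshbk_sketch(A, b, lam=1.0, eta=1.0, iters=500):
    # Dual iterate x*_k starts at zero; the primal iterate is x_k = grad f*(x*_k).
    x_star = np.zeros(A.shape[1])
    x = soft_threshold(x_star, lam)
    for _ in range(iters):
        r = A @ x - b                         # full residual defining the surrogate hyperplane
        g = A.T @ r
        denom = g @ g
        if denom == 0.0:                      # x already solves Ax = b
            break
        x_star -= eta * ((r @ r) / denom) * g # dual (Bregman-Kaczmarz) step
        x = soft_threshold(x_star, lam)       # back to the primal via soft thresholding
    return x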


Data Availability

The data that support the findings of this study are available upon reasonable request from the authors.

References

1. Chen, S.S.-B., Donoho, D.L., Saunders, M.A.: Atomic decomposition by basis pursuit. SIAM Rev. 43(1), 129–159 (2001)

2. Donoho, D.L.: Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006)

3. Cai, J.-F., Osher, S., Shen, Z.-W.: Convergence of the linearized Bregman iteration for \(\ell _1\)-norm minimization. Math. Comput. 78(268), 2127–2136 (2009)

4. Cai, J.-F., Osher, S., Shen, Z.-W.: Linearized Bregman iterations for compressed sensing. Math. Comput. 78(267), 1515–1536 (2009)

5. Yin, W.-T., Osher, S., Goldfarb, D., Darbon, J.: Bregman iterative algorithms for \(\ell _1\)-minimization with applications to compressed sensing. SIAM J. Imag. Sci. 1(1), 143–168 (2008)

6. Elad, M.: Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing. Springer Science & Business Media, Berlin (2010)

7. Liang, D., Cheng, J., Ke, Z.-W., Ying, L.: Deep magnetic resonance image reconstruction: inverse problems meet neural networks. IEEE Signal Process. Mag. 37(1), 141–151 (2020)

8. Adler, J., Öktem, O.: Solving ill-posed inverse problems using iterative deep neural networks. Inverse Prob. 33(12), 124007 (2017)

9. Arridge, S., Maass, P., Öktem, O., Schönlieb, C.B.: Solving inverse problems using data-driven models. Acta Numer. 28, 1–174 (2019)

10. Benning, M., Burger, M.: Modern regularization methods for inverse problems. Acta Numer. 27, 1–111 (2018)

11. Kaczmarz, S.: Angenäherte Auflösung von Systemen linearer Gleichungen. Bull. Int. Acad. Pol. Sci. Lett. 35, 335–357 (1937)

12. Strohmer, T., Vershynin, R.: A randomized Kaczmarz algorithm with exponential convergence. J. Fourier Anal. Appl. 15(2), 262–278 (2009)

13. Tondji, L., Lorenz, D.A., Necoara, I.: An accelerated randomized Bregman–Kaczmarz method for strongly convex linearly constrained optimization. In: Proceedings of the 2023 European Control Conference (ECC), pp. 1–6. IEEE (2023)

14. Schöpfer, F., Lorenz, D.A.: Linear convergence of the randomized sparse Kaczmarz method. Math. Program. 173, 509–536 (2019)

15. Lorenz, D.A., Wenger, S., Schöpfer, F., Magnor, M.: A sparse Kaczmarz solver and a linearized Bregman method for online compressed sensing. In: Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), pp. 1347–1351 (2014)

16. Tondji, L., Tondji, I., Lorenz, D.A.: Adaptive Bregman–Kaczmarz: an approach to solve linear inverse problems with independent noise exactly. Inverse Prob. 40(9), 095006 (2024)

17. Tondji, L., Necoara, I., Lorenz, D.A.: Acceleration and restart for the randomized Bregman–Kaczmarz method. Linear Algebra Appl. 699, 508–538 (2024)

18. Lorenz, D.A., Schöpfer, F., Wenger, S.: The linearized Bregman method via split feasibility problems: analysis and generalizations. SIAM J. Imag. Sci. 7(2), 1237–1262 (2014)

19. Petra, S.: Randomized sparse block Kaczmarz as randomized dual block-coordinate descent. Analele Ştiinţifice ale Univ. Ovidius Constanţa, Seria Matematică 23(3), 129–149 (2015)

20. Zhang, L., Yuan, Z.-Y., Wang, H.-X., Zhang, H.: A weighted randomized sparse Kaczmarz method for solving linear systems. Comput. Appl. Math. 41(8), 383 (2022)

21. Yuan, Z.-Y., Zhang, L., Wang, H.-X., Zhang, H.: Adaptively sketched Bregman projection methods for linear systems. Inverse Prob. 38(6), 065005 (2022)

22. Tondji, L., Lorenz, D.A.: Faster randomized block sparse Kaczmarz by averaging. Numer. Algorithms 93(4), 1417–1451 (2023)

23. Lorenz, D.A., Winkler, M.: Minimal error momentum Bregman–Kaczmarz. arXiv preprint arXiv:2307.15435 (2023)

24. Yun, Z., Han, D., Su, Y.-S., Xie, J.-X.: Fast stochastic dual coordinate descent algorithms for linearly constrained convex optimization. arXiv preprint arXiv:2307.16702 (2023)

25. Marshall, N.F., Mickelin, O.: An optimal scheduled learning rate for a randomized Kaczmarz algorithm. SIAM J. Matrix Anal. Appl. 44, 312–330 (2023)

26. Rockafellar, R.T., Wets, R.J.B.: Variational Analysis. Springer, Berlin (1998)

27. Bregman, L.M.: The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 7(3), 200–217 (1967)

28. Schöpfer, F., Lorenz, D.A., Tondji, L., Winkler, M.: Extended randomized Kaczmarz method for sparse least squares and impulsive noise problems. Linear Algebra Appl. 652, 132–154 (2022)

29. Greub, W., Rheinboldt, W.: On a generalization of an inequality of L. V. Kantorovich. Proc. Am. Math. Soc. 10, 407–415 (1959)

30. Wang, Z., Yin, J.-F., Zhao, J.-C.: The sparse Kaczmarz method with surrogate hyperplane for the regularized basis pursuit problem. J. Comput. Appl. Math. 454, 116182 (2025)


Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 11971354) and the Science and Technology Innovation Commission of Shenzhen (Grant No. 20220809161224001).

Author information


Corresponding author

Correspondence to Jun-Feng Yin.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A: Proof for Theorem 3.4

We first state a lemma that provides the Kantorovich inequality used in the proof of Theorem 3.4.

Lemma A.1

[29] Let B be a linear and self-adjoint operator on a Hilbert space \(\mathcal {H}\). If the real numbers m, M and the operator B fulfill the condition

$$\begin{aligned} 0<mE\le B\le ME, \end{aligned}$$

where E is the identity operator on \(\mathcal {H}\), then for all \(x\in \mathcal {H}\),

$$\begin{aligned} \left( x^\top Bx\right) \left( x^\top B^{-1}x\right) \le \frac{(M+m)^2}{4mM}\left( x^\top x\right) ^2. \end{aligned}$$
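
As a quick numerical sanity check of the lemma (ours, not part of the original text), the following verifies the Kantorovich inequality for a random symmetric positive definite B with spectrum inside [m, M]:

import numpy as np

rng = np.random.default_rng(0)
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal basis
eigs = rng.uniform(0.5, 4.0, n)                    # eigenvalues inside [m, M]
m, M = eigs.min(), eigs.max()
B = Q @ np.diag(eigs) @ Q.T                        # B is symmetric positive definite
B_inv = Q @ np.diag(1.0 / eigs) @ Q.T

x = rng.standard_normal(n)
lhs = (x @ B @ x) * (x @ B_inv @ x)
rhs = (M + m) ** 2 / (4 * m * M) * (x @ x) ** 2
assert lhs <= rhs + 1e-9 * rhs                     # Kantorovich inequality holds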

We are now ready to prove Theorem 3.4.

Proof

Denote \(e_k=x^*_k-\hat{x}\), where \(\hat{x}\) is the solution of \(Ax=b\). Then

$$\begin{aligned} e_{k+1}-e_k = x^*_{k+1} -x^*_k = \eta _k \frac{ r^{\top }_{k} r_{k} }{ \Vert A^{\top } r_k\Vert _ { 2 } ^ { 2 } } A^{\top } r_k, \end{aligned}$$

so it holds that

$$\begin{aligned} \Vert e_{k+1}-e_k\Vert ^2_2 = \eta _k^2 \frac{ \Vert r_{k}\Vert ^4_2 }{ \Vert A^{\top } r_k\Vert _ { 2 } ^ { 2 } }. \end{aligned}$$
(26)

Furthermore, by the definition of \(e_k\), it follows that

$$\begin{aligned} \begin{aligned}&e_{k+1} =e_k-\frac{{e_k}^{\top } A^{\top } Ae_{k} }{{e_k}^{\top } A^{\top } AA^{\top } Ae_{k} }\cdot A^{\top } Ae_{k},\\&e_k - e_{k+1}=\frac{A^\top Ae_k{e_k}^\top A^\top A}{{e_k}^\top A^\top AA^\top Ae_k} e_k. \end{aligned} \end{aligned}$$

Let \(P_k=\frac{A^\top Ae_k{e_k}^\top A^\top A}{{e_k}^\top A^\top AA^\top Ae_k}\). It is easy to verify that \(P_k^2=P_k\) and \(P_k^\top =P_k\), so \(P_k\) is an orthogonal projection matrix. It then follows that

$$\begin{aligned} \begin{aligned} \Vert e_k-e_{k+1}\Vert _2^2&=\Vert P_k e_k\Vert ^2_2\\&={e_k}^\top P_k^\top P_k\, e_k\\&={e_k}^\top P_k\, e_k\\&={e_k}^\top \frac{A^\top Ae_k{e_k}^\top A^\top A}{{e_k}^\top A^\top AA^\top Ae_k}e_k\\&=\frac{\left( {e_k}^\top A^\top Ae_k\right) ^2}{{e_k}^\top A^\top AA^\top Ae_k e_k^\top e_k}\Vert e_k\Vert ^2_2. \end{aligned} \end{aligned}$$

Denote

$$\begin{aligned} q=\frac{\left( {e_k}^\top A^\top Ae_k\right) ^2}{{e_k}^\top A^\top AA^\top Ae_k e_k^\top e_k}. \end{aligned}$$
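
As a brief aside (our check, assuming a generic dense A), the following confirms numerically that \(P_k\) is an orthogonal projector and that \(\Vert P_ke_k\Vert _2^2=q\Vert e_k\Vert _2^2\):

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
e = rng.standard_normal(60)

G = A.T @ A
v = G @ e                                  # A^T A e_k
P = np.outer(v, v) / (e @ G @ (G @ e))     # P_k from the proof

assert np.allclose(P @ P, P)               # idempotent
assert np.allclose(P, P.T)                 # symmetric
q = (e @ G @ e) ** 2 / ((e @ G @ (G @ e)) * (e @ e))
Pe = P @ e
assert np.isclose(Pe @ Pe, q * (e @ e))    # ||P_k e_k||^2 = q ||e_k||^2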

We now derive a sharp lower bound on q. Since \(A^\top A\) is symmetric positive semidefinite, we may apply the Kantorovich inequality of Lemma A.1 with \(x=Ae_k\) and \(B=AA^\top \); then \(x^\top Bx={e_k}^\top A^\top AA^\top Ae_k\), \(x^\top x={e_k}^\top A^\top Ae_k\), and \(x^\top B^{-1}x={e_k}^\top e_k\), the last identity holding since \(e_k\) lies in the range of \(A^\top \). It follows that

$$\begin{aligned} q \ge \frac{4\sigma ^2_{\min }\left( A\right) \sigma ^2_{\max }\left( A\right) }{\left( \sigma ^2_{\min } \left( A\right) +\sigma ^2_{\max } \left( A\right) \right) ^2}. \end{aligned}$$

Therefore,

$$\begin{aligned} \begin{aligned} \Vert e_{k+1}-e_k\Vert ^2_2&\ge \frac{4\sigma ^2_{\min }\left( A\right) \sigma ^2_{\max }\left( A\right) }{\left( \sigma ^2_{\min } \left( A\right) +\sigma ^2_{\max } \left( A\right) \right) ^2} \Vert e_k\Vert _2^2 \\&= 4\cdot \frac{1}{\left( \kappa (A)+\frac{1}{\kappa (A)}\right) ^2}\Vert e_k\Vert _2^2. \end{aligned} \end{aligned}$$
(27)

Combining equality (26) and inequality (27), it follows that

$$\begin{aligned} \frac{ \Vert r_{k}\Vert ^4_2 }{ \Vert A^{\top } r_k\Vert _ { 2 } ^ { 2 } } \ge 4\cdot \frac{1}{\eta _k^2 \left( \kappa (A)+\frac{1}{\kappa (A)}\right) ^2}\Vert e_k\Vert _2^2. \end{aligned}$$

Hence, it holds that

$$\begin{aligned} \begin{aligned} D_f^{x_{k+1}^*}\left( x_{k+1}, \hat{x}\right)&\le D_f^{x_k^*}\left( x_k, \hat{x}\right) -\frac{\eta _k \left( 2\alpha -\eta _k\right) }{2\alpha } \frac{\Vert r_k \Vert _2^4}{\Vert A^{\top } r_k \Vert _2^2} \\&\le D_f^{x_k^*}\left( x_k, \hat{x}\right) - \frac{4\alpha -2\eta _k}{\alpha \eta _k} \frac{1}{\left( \kappa (A)+\frac{1}{\kappa (A)}\right) ^2}\Vert x_k^*-\hat{x}\Vert ^2_2, \end{aligned} \end{aligned}$$

where the first inequality follows from inequality (5). \(\square \)
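
As an informal numerical check of the key inequality obtained from (26) and (27) (our sketch, for the case \(\eta _k=1\), assuming A has full row rank so that \(AA^\top \) is positive definite, and taking \(e_k\) in the range of \(A^\top \) so that Lemma A.1 applies as above):

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 40))          # full row rank almost surely
e = A.T @ rng.standard_normal(20)          # error in the row space of A
r = -A @ e                                 # residual corresponding to this error

sig = np.linalg.svd(A, compute_uv=False)
kappa = sig.max() / sig.min()              # condition number kappa(A)
lhs = (r @ r) ** 2 / ((A.T @ r) @ (A.T @ r))
rhs = 4.0 / (kappa + 1.0 / kappa) ** 2 * (e @ e)
assert lhs >= rhs - 1e-9 * rhs             # ||r_k||^4 / ||A^T r_k||^2 >= 4 (kappa + 1/kappa)^{-2} ||e_k||^2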

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Dong, Z., Wang, Z., Yin, G. et al. A Surrogate Hyperplane Bregman–Kaczmarz Method for Solving Linear Inverse Problems. J Sci Comput 102, 7 (2025). https://doi.org/10.1007/s10915-024-02737-5


  • DOI: https://doi.org/10.1007/s10915-024-02737-5
