
Block Decomposition Methods for Total Variation by Primal–Dual Stitching

Journal of Scientific Computing

Abstract

Due to advances in image capturing devices, huge images are now part of daily life, and the processing of large-scale image data is in high demand. Since total variation (TV) is a de facto standard in image processing, we consider block decomposition methods for TV-based variational models to handle large-scale images. Unfortunately, TV is non-separable and non-smooth, so solving TV-based variational models in a block decomposition is challenging. In this paper, we introduce a primal–dual stitching (PDS) method to efficiently solve TV-based variational models in the block decomposition framework. To characterize TV in the block decomposition framework, we focus only on the proximal map of the TV function. Empirically, we have observed that the proposed PDS-based block decomposition framework outperforms other state-of-the-art methods, such as the Bregman operator splitting (BOS) based approach, in terms of computational speed.
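The proximal map of the TV function mentioned above is ROF denoising: \(u = \arg\min_u \tfrac{1}{2}\|u-f\|^2 + \lambda\,\mathrm{TV}(u)\). As a minimal illustration of this subproblem on a single block (not the authors' block-decomposed algorithm, and with function names and parameters of my own choosing), the following sketch computes it with Chambolle's dual projection iteration:

```python
import numpy as np

def grad(u):
    """Forward differences with Neumann boundary (last row/column zero)."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    d = np.zeros_like(px)
    d[0, :] = px[0, :]
    d[1:-1, :] = px[1:-1, :] - px[:-2, :]
    d[-1, :] = -px[-2, :]
    d[:, 0] += py[:, 0]
    d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]
    d[:, -1] -= py[:, -2]
    return d

def prox_tv(f, lam, n_iter=200, tau=0.125):
    """Approximate u = argmin_u 0.5*||u - f||^2 + lam*TV(u)
    via Chambolle's fixed-point iteration on the dual variable p;
    tau <= 1/8 guarantees convergence for this discretization."""
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        denom = 1.0 + tau * np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / denom
        py = (py + tau * gy) / denom
    return f - lam * div(px, py)
```

In the paper's setting, each block would solve such a subproblem on its own subdomain, with PDS handling consistency across block boundaries; that stitching step is not reproduced here.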


Notes

  1. It is inspired by an image stitching method in [25].

  2. As shown in Theorem 3, the sequential method (i.e., BD-S) with \({\textit{subMaxIt}}=1\) and \(\omega =1\) corresponds to the method (i.e., PLAD) without using block decomposition.

    Fig. 3

    For performance comparison of PDS and BDD for \(2\times 2\) block decomposition, we use a \(256 \times 256\) synthetic image (left). The center of the \(2\times 2\) block boundary, \((128, 128)\), is shifted by \((dx, dy)\), where \(-10\le dx \le 10\) and \(-10 \le dy \le 10\) (right). See Figs. 4 and 5 and Table 1 for test results.

    Fig. 4

    Performance comparison of PDS and BDD for \(2\times 2\) block decomposition (see Fig. 3). The PSNR and Obj values at each pixel location correspond to the shifted center \((dx, dy)\) with \(-10\le dx \le 10\) and \(-10 \le dy \le 10\). For BDD, we use one BOS iteration. The PDS-based model shows smaller variation in PSNR and Obj value than the BOS-based method. Shown are PSNR (dB) and Obj values of (a) sequential PDS, (b) parallel PDS, (c) sequential BDD, and (d) parallel BDD, respectively.

    Fig. 5

    Performance of BDD versus the number of BOS iterations. (a) and (c): the value averaged over 25 different cases (i.e., \(-2\le dx \le 2\) and \(-2\le dy \le 2\)) in Fig. 3. (b) and (d): the variance of each case. As the number of BOS iterations increases, the PSNR and objective value improve. To obtain performance comparable to the proposed PDS, more than 15 BOS iterations are needed. However, as observed in (b) and (d), the variance is still larger than for the PDS-based model.

  3. Note that the one-iteration condition is used in coordinate optimization with only the primal variable to find a solution of the fused Lasso problem [10].

  4. Note that the original PLAD algorithm in [27] linearizes the fidelity term and the augmented term at the same time. However, since the fidelity term of the proxTV model (1) is a simple proximal term, the original PLAD method is, up to a constant, equal to (40), which we call PLAD in this paper.
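For intuition only, the following is a generic proximal-linearized alternating direction (PLAD-style) sketch for the proxTV model \(\min_u \tfrac{1}{2}\|u-f\|^2 + \lambda\,\mathrm{TV}(u)\) with the splitting \(d = \nabla u\). It is not the paper's exact iteration (40); the parameter names (`beta`, `delta`) are my own. Linearizing the augmented quadratic term at the current iterate lets the u-update be done in closed form, without an inner linear solve:

```python
import numpy as np

def grad(u):
    """Forward differences with Neumann boundary."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    d = np.zeros_like(px)
    d[0, :] = px[0, :]; d[1:-1, :] = px[1:-1, :] - px[:-2, :]; d[-1, :] = -px[-2, :]
    d[:, 0] += py[:, 0]; d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]; d[:, -1] -= py[:, -2]
    return d

def plad_proxtv(f, lam, beta=1.0, n_iter=300):
    """PLAD-style iteration for min_u 0.5*||u-f||^2 + lam*TV(u),
    constraint d = grad(u), scaled multiplier b.
    delta >= 8*beta makes the linearized step a majorizer
    (||grad||^2 <= 8 for 2D forward differences)."""
    delta = 8.0 * beta
    u = f.copy()
    dx = np.zeros_like(f); dy = np.zeros_like(f)
    bx = np.zeros_like(f); by = np.zeros_like(f)
    for _ in range(n_iter):
        # d-step: isotropic soft shrinkage of grad(u) + b
        gx, gy = grad(u)
        sx, sy = gx + bx, gy + by
        mag = np.maximum(np.sqrt(sx**2 + sy**2), 1e-12)
        shrink = np.maximum(mag - lam / beta, 0.0) / mag
        dx, dy = shrink * sx, shrink * sy
        # u-step: linearize (beta/2)*||grad(u)-d+b||^2 at the current u,
        # then solve the resulting proximal problem in closed form
        rx, ry = gx - dx + bx, gy - dy + by
        gT_r = -div(rx, ry)          # adjoint of grad applied to the residual
        u = (f + delta * u - beta * gT_r) / (1.0 + delta)
        # multiplier update with the refreshed gradient
        gx, gy = grad(u)
        bx += gx - dx
        by += gy - dy
    return u
```

The closed-form u-step is what distinguishes this family from exact ADMM, where the u-subproblem would require inverting \(I + \beta\nabla^{\top}\nabla\).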

References

  1. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009)

  2. Beck, A., Teboulle, M.: Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 18, 2419–2434 (2009)

  3. Bertsekas, D.P., Tsitsiklis, J.N.: Parallel and Distributed Computation. Prentice Hall, Upper Saddle River (1989)

  4. Carstensen, C.: Domain decomposition for a non-smooth convex minimization problem and its application to plasticity. Numer. Linear Algebra Appl. 4, 177–190 (1998)

  5. Combettes, P., Wajs, W.: Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005)

  6. Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40, 120–145 (2011)

  7. Chan, T.F., Shen, J.: Image Processing and Analysis. SIAM, Philadelphia (2005)

  8. Chang, H., Tai, X.-C., Wang, L.-L., Yang, D.: Convergence rate of overlapping domain decomposition methods for the Rudin–Osher–Fatemi model based on a dual formulation. SIAM J. Imaging Sci. 8, 564–591 (2015)

  9. Esser, E.: Applications of Lagrangian-based alternating direction methods and connections to split Bregman. UCLA CAM Report 09-31 (2009)

  10. Friedman, J., Hastie, T., Höfling, H., Tibshirani, R.: Pathwise coordinate optimization. Ann. Appl. Stat. 1, 302–332 (2007)

  11. Fornasier, M.: Domain decomposition methods for linear inverse problems with sparsity constraints. Inverse Probl. 23, 2505–2526 (2007)

  12. Fornasier, M., Langer, A., Schönlieb, C.-B.: A convergent overlapping domain decomposition method for total variation minimization. Numer. Math. 116, 645–685 (2010)

  13. Fornasier, M., Schönlieb, C.-B.: Subspace correction methods for total variation and l1-minimization. SIAM J. Numer. Anal. 47, 3397–3428 (2009)

  14. Goldstein, T., Osher, S.: The split Bregman method for l1 regularized problems. SIAM J. Imaging Sci. 2, 323–343 (2009)

  15. Hageman, L.A., Porsching, T.A.: Aspects of nonlinear block successive overrelaxation. SIAM J. Numer. Anal. 2, 316–335 (1975)

  16. He, B., Liao, L.Z., Han, D., Yang, H.: A new inexact alternating directions method for monotone variational inequalities. Math. Program. 92, 103–118 (2002)

  17. Hintermüller, M., Langer, A.: Subspace correction methods for a class of non-smooth and non-additive convex variational problems with mixed \(L^{1}/L^{2}\) data-fidelity in image processing. SIAM J. Imaging Sci. 6, 2134–2173 (2013)

  18. Hintermüller, M., Langer, A.: Non-overlapping domain decomposition methods for dual total variation based image denoising. J. Sci. Comput. 62, 456–481 (2015)

  19. Hintermüller, M., Langer, A.: Subspace correction methods for a class of non-smooth and non-additive convex variational problems with mixed \(L^1/L^2\) data-fidelity in image processing. SIAM J. Imaging Sci. 6, 2134–2173 (2013)

  20. Kang, M., Yun, S., Woo, H.: Two-level convex relaxed variational model for multiplicative denoising. SIAM J. Imaging Sci. 6, 875–903 (2013)

  21. Langer, A., Osher, S., Schönlieb, C.-B.: Bregmanized domain decomposition for image restoration. J. Sci. Comput. 54, 549–576 (2013)

  22. Rudin, L., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D 60, 259–268 (1992)

  23. Tseng, P.: Convergence of a block coordinate descent method for nondifferentiable minimization. J. Optim. Theory Appl. 109, 475–494 (2001)

  24. Tseng, P., Yun, S.: A coordinate gradient descent method for nonsmooth separable minimization. Math. Program. (Ser. B) 117, 387–423 (2009)

  25. Wang, W., Ng, M.K.: A variational approach for image stitching I. SIAM J. Imaging Sci. 6, 1318–1344 (2013)

  26. Wen, Z., Yin, W., Zhang, Y.: Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Math. Program. Comput. 4, 333–361 (2012)

  27. Woo, H., Yun, S.: Proximal linearized alternating direction method for multiplicative denoising. SIAM J. Sci. Comput. 35, B336–B358 (2013)

  28. Wu, C., Tai, X.-C.: Augmented Lagrangian method, Dual methods and Split-Bregman Iterations for ROF, vectorial TV and higher order models. SIAM J. Imaging Sci. 3, 300–339 (2010)

  29. Xu, J., Tai, X.-C., Wang, L.-L.: A two-level domain decomposition method for image restoration. Inverse Probl. Imaging 4, 523–545 (2010)

  30. Yun, S., Woo, H.: Linearized proximal alternating minimization algorithm for motion deblurring by nonlocal regularization. Pattern Recognit. 44, 1312–1326 (2011)

  31. Zhang, X., Burger, M., Bresson, X., Osher, S.: Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM J. Imaging Sci. 3, 253–276 (2010)

  32. Zhang, X., Burger, M., Osher, S.: A unified primal-dual algorithm framework based on Bregman iteration. J. Sci. Comput. 46, 20–46 (2011)

  33. Zhu, M., Chan, T.: An efficient primal-dual hybrid gradient algorithm for total variation image restoration. UCLA CAM Report (2008)

Acknowledgments

The authors would like to thank the anonymous referees for their detailed comments to improve this paper. Chang-Ock Lee was supported by the National Research Foundation of Korea (NRF-2011-0015399). Hyenkyun Woo was supported by the New Professor Research Program of Koreatech and Basic Science Program through the NRF of Korea funded by the Ministry of Education (NRF-2015R101A1A01061261). Sangwoon Yun was supported by the TJ Park Science Fellowship of POSCO TJ Park Foundation and Basic Science Research Program through NRF funded by the Ministry of Science, ICT & Future Planning (2012R1A1A1006406, 2014R1A1A2056038).

Author information

Correspondence to Hyenkyun Woo.


Cite this article

Lee, CO., Lee, J.H., Woo, H. et al. Block Decomposition Methods for Total Variation by Primal–Dual Stitching. J Sci Comput 68, 273–302 (2016). https://doi.org/10.1007/s10915-015-0138-9

