Poisson Tensor Completion via Nonconvex Regularization and Nonlocal Self-Similarity for Multi-dimensional Image Recovery

  • Published in: Journal of Scientific Computing

Abstract

Poisson tensor completion aims to recover a tensor from partial observations corrupted by Poisson noise. Existing approaches utilize the transformed tensor nuclear norm to exploit the low-rankness of a tensor, namely the \(\ell _1\) norm of the singular value vectors of all frontal slices of the tensor in the transformed domain. However, the \(\ell _1\) norm is suboptimal because it yields a biased estimate. In this paper, we propose a nonconvex model based on the transformed tensor nuclear norm for Poisson tensor completion. To exploit the global low-rankness of the underlying tensor, a family of nonconvex functions is applied to the singular values of all frontal slices of the tensor in the transformed domain. Furthermore, nonlocal self-similarity is incorporated into the nonconvex model to describe the similar structures and characterize the intrinsic details of multi-dimensional images. A proximal alternating minimization algorithm is developed to solve the resulting models, and its convergence is established under very mild conditions. Extensive numerical experiments on hyperspectral images, video images, and fluorescence microscope images demonstrate that the proposed approach outperforms several state-of-the-art methods.
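To make the regularizer in the abstract concrete, the following is a minimal numerical sketch (not the authors' implementation) of the quantity being penalized: the transformed tensor nuclear norm of a third-order array, together with a nonconvex variant obtained by passing the singular values of each transformed frontal slice through a concave penalty. The specific choices here, the DFT along the third mode as the transform, the minimax concave penalty (MCP) as the nonconvex function, and the parameters `lam` and `gamma`, are illustrative assumptions; the paper's transform and penalty family may differ.

```python
import numpy as np

def mcp(s, lam=1.0, gamma=2.0):
    """Minimax concave penalty, a nearly unbiased concave surrogate for lam*|s|.
    Illustrative choice of nonconvex function; the parameters are placeholders."""
    return np.where(s <= gamma * lam,
                    lam * s - s ** 2 / (2.0 * gamma),
                    0.5 * gamma * lam ** 2)

def transformed_tensor_norm(X, penalty=None):
    """Sum the (optionally penalized) singular values of every frontal slice of X
    in the transform domain; here the transform is the DFT along the third mode."""
    X_hat = np.fft.fft(X, axis=2)                      # move to the transformed domain
    total = 0.0
    for k in range(X.shape[2]):                        # loop over transformed frontal slices
        s = np.linalg.svd(X_hat[:, :, k], compute_uv=False)
        total += s.sum() if penalty is None else penalty(s).sum()
    return total / X.shape[2]                          # scaling convention varies by paper

if __name__ == "__main__":
    X = np.random.rand(30, 30, 8)                      # small synthetic third-order tensor
    print("convex   (l1 of singular values):", transformed_tensor_norm(X))
    print("nonconvex (MCP of singular values):", transformed_tensor_norm(X, penalty=mcp))
```

With `penalty=None` this reduces to the convex transformed tensor nuclear norm (the \(\ell _1\) norm of the singular value vectors), i.e., the biased estimator that the nonconvex model is designed to improve upon.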


Data Availability

Enquiries about data availability should be directed to the authors.

Notes

  1. Global low-rankness refers to exploring the low-rankness of the underlying tensor directly.

  2. https://www.cs.rochester.edu/u/jliu/publications.html

  3. https://xu-yangyang.github.io/TMac/

  4. https://github.com/Shiran-Yuan/SPTC

  5. https://xj-zhang.github.io/math/publications.html

  6. https://personalpages.manchester.ac.uk/staff/d.h.foster//Hyperspectral_images_of_natural_scenes_04.html

  7. https://media.xiph.org/video/derf/

  8. http://www.cellimagelibrary.org/images/35532


Acknowledgements

The authors would like to thank the anonymous referees for their helpful comments and suggestions, which have improved this paper.

Funding

The research of D. Qiu was supported in part by the National Natural Science Foundation of China under Grant No. 12201473 and the Science Foundation of Wuhan Institute of Technology under Grant No. K202256. The research of B. Li was supported in part by the National Natural Science Foundation of China under Grant No. 62377019 and Self-determined Research Funds of CCNU from the Colleges’ Basic Research under Grant No. CCNU24JC004. The research of X. Zhang was supported in part by the National Natural Science Foundation of China under Grant No. 12171189, Hubei Provincial Natural Science Foundation of China under Grant No. JCZRYB202501474, and Fundamental Research Funds for the Central Universities under Grant No. CCNU24ai002.

Author information

Corresponding author

Correspondence to Xiongjun Zhang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Qiu, D., Xia, S., Yang, B. et al. Poisson Tensor Completion via Nonconvex Regularization and Nonlocal Self-Similarity for Multi-dimensional Image Recovery. J Sci Comput 102, 76 (2025). https://doi.org/10.1007/s10915-025-02801-8

