Multi-Dimensional Image Recovery via Fully-Connected Tensor Network Decomposition Under the Learnable Transforms

Published in Journal of Scientific Computing.

Abstract

Multi-dimensional image recovery from incomplete data is a fundamental problem in data processing. Owing to its ability to capture the correlations between any two modes of a multi-dimensional image, i.e., the target tensor, the fully-connected tensor network (FCTN) decomposition has recently shown promising performance on multi-dimensional image recovery. However, FCTN decomposition is computationally expensive, especially for large-scale multi-dimensional images. To address this deficiency, we propose a learnable transform-based FCTN model (termed T-FCTN), which retains the representational advantage of FCTN decomposition at a much lower computational cost. More concretely, we learn semi-orthogonal transforms along each mode of the target tensor to project the large-scale tensor \({\mathcal {X}}\in {\mathbb {R}}^{I\times {I}\times {\cdots }\times {I}}\) onto a small-scale essential tensor \({\mathcal {E}}\in {\mathbb {R}}^{r\times {r}\times {\cdots }\times {r}}\), and then apply the FCTN decomposition to this small-scale essential tensor. To tackle the proposed model, we develop an efficient proximal alternating minimization (PAM)-based algorithm with a theoretical convergence guarantee. Moreover, the per-iteration computational complexity of PAM for T-FCTN is \({\mathcal {O}}(N\sum _{k=2}^N{r^k}{R^{k(N-k)+k-1}}+N{r^{N-1}}R^{2(N-1)}+N{R}^{3(N-1)}+N\sum _{k=1}^N{r^k}{I}^{N-k+1})\), which is significantly lower than the \({\mathcal {O}}(N\sum _{k=2}^N{I^k}{R^{k(N-k)+k-1}}+N{I^{N-1}}R^{2(N-1)}+N{R}^{3(N-1)})\) of PAM for FCTN when \(r\ll I\). Extensive numerical experiments on color videos and light field images illustrate the superiority of the proposed method over other state-of-the-art methods in terms of quality metrics, visual quality, and running time.
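The core idea above, projecting a large tensor onto a small essential tensor via one semi-orthogonal transform per mode and then working at the reduced size, can be sketched numerically. The snippet below is not the paper's T-FCTN algorithm (there the transforms are learned jointly with the FCTN factors under PAM); it is a minimal HOSVD-style illustration in NumPy, where each transform is simply taken as the leading left singular vectors of a mode unfolding. The function names (`project_to_essential`, `lift_back`) are ours, for illustration only.

```python
import numpy as np

def mode_unfold(x, mode):
    """Mode-k matricization: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(x, mode, 0).reshape(x.shape[mode], -1)

def mode_fold(mat, mode, shape):
    """Inverse of mode_unfold for a tensor of the given full shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(mat.reshape(full), 0, mode)

def project_to_essential(x, r):
    """Project an N-way tensor down to size r x ... x r using one
    semi-orthogonal transform per mode (here: the r leading left singular
    vectors of each unfolding, an HOSVD-style choice, not a learned one).
    Returns the small essential tensor and the list of transforms."""
    transforms = []
    e = x
    for k in range(x.ndim):
        u, _, _ = np.linalg.svd(mode_unfold(e, k), full_matrices=False)
        q = u[:, :r]                      # I_k x r, satisfies q.T @ q = I_r
        transforms.append(q)
        shape = list(e.shape)
        shape[k] = r
        e = mode_fold(q.T @ mode_unfold(e, k), k, tuple(shape))
    return e, transforms

def lift_back(e, transforms):
    """Map the essential tensor back to the original size via the transforms."""
    x = e
    for k, q in enumerate(transforms):
        shape = list(x.shape)
        shape[k] = q.shape[0]
        x = mode_fold(q @ mode_unfold(x, k), k, tuple(shape))
    return x
```

When the target tensor genuinely has multilinear rank at most r in every mode, this projection is lossless, which is the regime in which decomposing the small r x ... x r essential tensor instead of the full I x ... x I tensor gives up nothing while shrinking the cost of every FCTN contraction.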


Figures 1–6 (thumbnails; see the full article).


Data Availability

Enquiries about data availability should be directed to the authors.

Notes

  1. TT has been widely used in physics [29,30,31] and is known as matrix product states (MPS) with open boundary conditions [32].

  2. TR is also called MPS with periodic boundary conditions in physics [32].

  3. FCTN is also called complete graph tensor network states (CTNS) in physics [39].

  4. The data is available at http://www.brl.ntt.co.jp/people/akisato/saliency3.html and http://trace.eas.asu.edu/yuv/.

  5. The data is available at http://hci-lightfield.iwr.uni-heidelberg.de.

References

  1. Che, M., Wei, Y., Yan, H.: An efficient randomized algorithm for computing the approximate Tucker decomposition. J. Sci. Comput. (2021). https://doi.org/10.1007/s10915-021-01545-5

  2. Li, J.-F., Li, W., Vong, S.-W., Luo, Q.-L., Xiao, M.: A Riemannian optimization approach for solving the generalized eigenvalue problem for nonsquare matrix pencils. J. Sci. Comput. (2020). https://doi.org/10.1007/s10915-020-01173-5

  3. Wang, Y., Yin, W., Zeng, J.: Global convergence of ADMM in nonconvex nonsmooth optimization. J. Sci. Comput. 78, 29–63 (2019)

  4. Zhao, X., Bai, M., Ng, M.K.: Nonconvex optimization for robust tensor completion from grossly sparse observations. J. Sci. Comput. 85, 46 (2020)

  5. Li, M., Li, W., Chen, Y., Xiao, M.: The nonconvex tensor robust principal component analysis approximation model via the weighted \(\ell _p\)-norm regularization. J. Sci. Comput. (2021). https://doi.org/10.1007/s10915-021-01679-6

  6. Liu, J., Musialski, P., Wonka, P., Ye, J.: Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 208–220 (2013)

  7. Wang, A., Zhou, G., Zhao, Q.: Guaranteed robust tensor completion via *L-SVD with applications to remote sensing data. Remote Sens. 13(18), 3671 (2021)

  8. Zhang, H., Zhao, X.-L., Jiang, T.-X., Ng, M.K., Huang, T.-Z.: Multiscale feature tensor train rank minimization for multidimensional image recovery. IEEE Trans. Cybern. (2021). https://doi.org/10.1109/TCYB.2021.3108847

  9. Zhang, X., Ng, M.K.: Low rank tensor completion with Poisson observations. IEEE Trans. Pattern Anal. Mach. Intell. (2021). https://doi.org/10.1109/TPAMI.2021.3059299

  10. Chen, Z., Zhou, G., Zhao, Q.: Hierarchical factorization strategy for high-order tensor and application to data completion. IEEE Signal Process. Lett. 28, 1255–1259 (2021)

  11. Hou, J., Zhang, F., Qiu, H., Wang, J., Wang, Y., Meng, D.: Robust low-tubal-rank tensor recovery from binary measurements. IEEE Trans. Pattern Anal. Mach. Intell. 44(8), 4355–4373 (2021)

  12. Zhao, X.-L., Yang, J.-H., Ma, T.-H., Jiang, T.-X., Ng, M.K., Huang, T.-Z.: Tensor completion via complementary global, local, and nonlocal priors. IEEE Trans. Image Process. 31, 984–999 (2022)

  13. Kolda, T.G., Bader, B.W.: Tensor decompositions and applications. SIAM Rev. 51(3), 455–500 (2009)

  14. De Lathauwer, L., De Moor, B., Vandewalle, J.: A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl. 21(4), 1253–1278 (2000)

  15. Xu, Y., Hao, R., Yin, W., Su, Z.: Parallel matrix factorization for low-rank tensor completion. Inverse Probl. Imaging 9(2), 601–624 (2015)

  16. Zhao, Q., Zhang, L., Cichocki, A.: Bayesian CP factorization of incomplete tensors with automatic rank determination. IEEE Trans. Pattern Anal. Mach. Intell. 37(9), 1751–1763 (2015)

  17. Fu, X., Ibrahim, S., Wai, H.-T., Gao, C., Huang, K.: Block-randomized stochastic proximal gradient for low-rank tensor factorization. IEEE Trans. Signal Process. 68, 2170–2185 (2020)

  18. Kilmer, M.E., Martin, C.D.: Factorization strategies for third-order tensors. Linear Algebra Appl. 435(3), 641–658 (2011)

  19. Kilmer, M.E., Braman, K., Hao, N., Hoover, R.C.: Third-order tensors as operators on matrices: a theoretical and computational framework with applications in imaging. SIAM J. Matrix Anal. Appl. 34(1) (2013)

  20. Kilmer, M.E., Horesh, L., Avron, H., Newman, E.: Tensor-tensor algebra for optimal representation and compression of multiway data. Proc. Natl. Acad. Sci. 118(28), e2015851118 (2021)

  21. Song, G., Ng, M.K., Zhang, X.: Robust tensor completion using transformed tensor singular value decomposition. Numer. Linear Algebra Appl. 27(3), e2299 (2020)

  22. Kernfeld, E., Kilmer, M.E., Aeron, S.: Tensor-tensor products with invertible linear transforms. Linear Algebra Appl. 485, 545–570 (2015)

  23. Lu, C., Peng, X., Wei, Y. : Low-rank tensor completion with a new tensor nuclear norm induced by invertible linear transforms. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5989–5997 (2019)

  24. Li, B.-Z., Zhao, X.-L., Ji, T.-Y., Zhang, X.-J., Huang, T.-Z.: Nonlinear transform induced tensor nuclear norm for tensor completion. J. Sci. Comput. 92(3) (2022)

  25. Jiang, T.-X., Ng, M.K., Zhao, X.-L., Huang, T.-Z.: Framelet representation of tensor nuclear norm for third-order tensor completion. IEEE Trans. Image Process. 29, 7233–7244 (2020)

  26. Kong, H., Lu, C., Lin, Z.: Tensor Q-rank: new data dependent definition of tensor rank. Mach. Learn. 110, 1867–1900 (2021)

  27. Luo, Y.-S., Zhao, X.-L., Jiang, T.-X., Chang, Y., Ng, M.K., Li, C.: Self-supervised nonlinear transform-based tensor nuclear norm for multi-dimensional image recovery. IEEE Trans. Image Process. 31, 3793–3808 (2022)

  28. Qin, W., Wang, H., Zhang, F., Wang, J., Luo, X., Huang, T.: Low-rank high-order tensor completion with applications in visual data. IEEE Trans. Image Process. 31, 2433–2448 (2022)

  29. Anderson, P.W.: New approach to the theory of superexchange interactions. Phys. Rev. 115, 2–13 (1959)

  30. White, S.R.: Density matrix formulation for quantum renormalization groups. Phys. Rev. Lett. 69, 2863–2866 (1992)

  31. White, S.R., Huse, D.A.: Numerical renormalization-group study of low-lying eigenstates of the antiferromagnetic S=1 Heisenberg chain. Phys. Rev. B 48, 3844–3852 (1993)

  32. Orús, R.: A practical introduction to tensor networks: matrix product states and projected entangled pair states. Ann. Phys. 349, 117–158 (2014)

  33. Oseledets, I.V.: Tensor-train decomposition. SIAM J. Sci. Comput. 33(5), 2295–2317 (2011)

  34. Zhao, Q., Zhou, G., Xie, S., Zhang, L., Cichocki, A.: Tensor ring decomposition. arXiv:1606.05535 (2016)

  35. Bengua, J.A., Phien, H.N., Tuan, H.D., Do, M.N.: Efficient tensor completion for color image and video recovery: low-rank tensor train. IEEE Trans. Image Process. 26(5), 2466–2479 (2017)

  36. Chen, C., Wu, Z.-B., Chen, Z.-T., Zheng, Z.-B., Zhang, X.-J.: Auto-weighted robust low-rank tensor completion via tensor-train. Inf. Sci. 567, 100–115 (2021)

  37. Yuan, L., Li, C., Mandic, D., Cao, J., Zhao, Q.: Tensor ring decomposition with rank minimization on latent space: an efficient approach for tensor completion. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33(1), pp. 9151–9158 (2019)

  38. Yu, J., Zhou, G., Sun, W., Xie, S.: Robust to rank selection: low-rank sparse tensor-ring completion. IEEE Trans. Neural Netw. Learn. Syst., pp. 1–15 (2021)

  39. Marti, K.H., Bauer, B., Reiher, M., Troyer, M., Verstraete, F.: Complete-graph tensor network states: a new fermionic wave function ansatz for molecules. New J. Phys. 12(10), 103008 (2010)

  40. Zheng, Y.-B., Huang, T.-Z., Zhao, X.-L., Zhao, Q., Jiang, T.-X.: Fully-connected tensor network decomposition and its application to higher-order tensor completion. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 12, pp. 11071–11078 (2021)

  41. Zheng, Y.-B., Huang, T.-Z., Zhao, X.-L., Zhao, Q.: Tensor completion via fully-connected tensor network decomposition with regularized factors. J. Sci. Comput. 92, 1–35 (2022)

  42. Silva, V.D., Lim, L.-H.: Tensor rank and the ill-posedness of the best low-rank approximation problem. SIAM J. Matrix Anal. Appl. 30(3), 1084–1127 (2008)

  43. Krishnan, D., Fergus, R.: Fast image deconvolution using hyper-Laplacian priors. In: Proceedings of the International Conference on Neural Information Processing Systems, pp. 1033–1041 (2009)

  44. Attouch, H., Bolte, J., Redont, P., Soubeyran, A.: Proximal alternating minimization and projection methods for nonconvex problems: an approach based on the Kurdyka–Łojasiewicz inequality. Math. Oper. Res. 35(2), 438–457 (2010)

  45. Attouch, H., Bolte, J., Svaiter, B.F.: Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss–Seidel methods. Math. Program. 137, 91–129 (2013)

  46. Bolte, J., Daniilidis, A., Lewis, A., Shiota, M.: Clarke subgradients of stratifiable functions. SIAM J. Optim. 18(2), 556–572 (2007)

  47. Bolte, J., Sabach, S., Teboulle, M.: Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Program. 146, 459–494 (2014)

  48. Xie, Q., Zhao, Q., Meng, D., Xu, Z.: Kronecker-Basis-representation based tensor sparsity and its applications to tensor recovery. IEEE Trans. Pattern Anal. Mach. Intell. 40(8), 1888–1902 (2018)

  49. Yair, N., Michaeli, T.: Multi-scale weighted nuclear norm image restoration. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3165–3174 (2018)

  50. Wang, Z., Bovik, A., Sheikh, H., Simoncelli, E.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)

Author information

Corresponding author

Correspondence to Xi-Le Zhao.

Ethics declarations

Conflicts of interest

The authors have not disclosed any competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This research is supported by NSFC (Nos. 61876203, 12171072), the Applied Basic Research Project of Sichuan Province (No. 2021YJ0107), the Key Project of Applied Basic Research in Sichuan Province (No. 2020YJ0216), and National Key Research and Development Program of China (No. 2020YFA0714001).

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Lyu, CY., Zhao, XL., Li, BZ. et al. Multi-Dimensional Image Recovery via Fully-Connected Tensor Network Decomposition Under the Learnable Transforms. J Sci Comput 93, 49 (2022). https://doi.org/10.1007/s10915-022-02009-0

