An iterative algorithm for third-order tensor multi-rank minimization

Abstract

Recent work by Kilmer et al. (A third-order generalization of the matrix SVD as a product of third-order tensors, Department of Computer Science, Tufts University, Medford, MA, 2008; Linear Algebra Appl 435(3):641–658, 2011; SIAM J Matrix Anal Appl 34(1):148–172, 2013) and Braman (Linear Algebra Appl 433(7):1241–1253, 2010) on tensor–tensor multiplication opens up a new avenue for studying third-order tensors. Based on this tensor–tensor multiplication and related concepts, familiar tools of linear algebra can be extended to third-order tensors. Motivated by this development, in this paper we consider the multi-rank of a tensor as a sparsity measure and propose a new model, called third-order tensor multi-rank minimization, as an extension of matrix rank minimization. We tackle this problem using two powerful techniques, operator splitting and convex relaxation, and on this basis propose a simple, easy-to-implement first-order algorithm. The proposed algorithm is shown to be globally convergent under some assumptions, and a continuation technique is applied to improve its numerical performance. Preliminary numerical results demonstrate the efficiency of the proposed algorithm, as well as the potential value and applications of the multi-rank and the tensor multi-rank minimization model.
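To make the notion concrete, the sketch below illustrates the multi-rank and its convex surrogate in the t-product framework of Kilmer et al.: for an n1 x n2 x n3 tensor, the multi-rank is the vector of ranks of the frontal slices of the tensor's Fourier transform along the third mode, and a natural convex surrogate sums the nuclear norms of those slices, whose shrinkage step is a slice-wise singular value thresholding in the Fourier domain (up to FFT scaling constants). This is an illustrative sketch only, not the authors' algorithm; the function names multi_rank and slicewise_svt and the NumPy implementation are our own.

    import numpy as np

    def multi_rank(A, tol=1e-10):
        """Multi-rank of a third-order tensor A (n1 x n2 x n3): the vector of
        ranks of the frontal slices of fft(A, axis=2)."""
        A_hat = np.fft.fft(A, axis=2)
        return np.array([np.linalg.matrix_rank(A_hat[:, :, k], tol=tol)
                         for k in range(A.shape[2])])

    def slicewise_svt(A, tau):
        """Slice-wise singular value thresholding in the Fourier domain: the
        basic shrinkage step associated with the sum of nuclear norms of the
        Fourier-domain frontal slices, a convex surrogate for the multi-rank.
        (The scaling of tau relative to a specific objective is omitted.)"""
        A_hat = np.fft.fft(A, axis=2)
        X_hat = np.empty_like(A_hat)
        for k in range(A.shape[2]):
            U, s, Vh = np.linalg.svd(A_hat[:, :, k], full_matrices=False)
            s = np.maximum(s - tau, 0.0)          # soft-threshold the singular values
            X_hat[:, :, k] = (U * s) @ Vh
        # conjugate symmetry of the slices is preserved, so the inverse FFT is
        # real up to roundoff for a real input tensor
        return np.real(np.fft.ifft(X_hat, axis=2))

For any tau >= 0, multi_rank(slicewise_svt(A, tau)) is componentwise no larger than multi_rank(A); this slice-wise shrinkage is the basic mechanism by which first-order methods of this kind promote low multi-rank across iterations.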


References

  1. Bazaraa, M.S., Sherali, H.D., Shetty, C.M.: Nonlinear Programming, 3rd edn. Wiley, New York (2006)

  2. Berry, M.W.: Large-scale sparse singular value decompositions. Int. J. Supercomput. Appl. 6(1), 13–49 (1992)

  3. Bertalmío, M., Sapiro, G., Caselles, V., Ballester, C.: Image Inpainting. Proceedings of SIGGRAPH, New Orleans (2000)

  4. Braman, K.: Third-order tensors as linear operators on a space of matrices. Linear Algebra Appl. 433(7), 1241–1253 (2010)

  5. Cai, J.F., Candès, E.J., Shen, Z.W.: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20(4), 1956–1982 (2010)

  6. Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9(6), 717–772 (2009)

  7. Comon, P., et al.: Tensor decompositions: state of the art and applications. In: McWhirter, J., Proudler, I. (eds.) Mathematics in Signal Processing V, pp. 1–24. Oxford University Press, Oxford (2001)

  8. Drineas, P., Kannan, R., Mahoney, M.W.: Fast Monte Carlo algorithms for matrices II: computing low-rank approximations to a matrix. SIAM J. Comput. 36(1), 158–183 (2006)

  9. Fazel, M., Hindi, H., Boyd, S.: A rank minimization heuristic with application to minimum order system approximation. In: Proceedings of the American Control Conference, Arlington, VA, vol. 6, pp. 4734–4739 (2001)

  10. Fazel, M.: Matrix rank minimization with applications. PhD thesis, Stanford University (2002)

  11. Gandy, S., Recht, B., Yamada, I.: Tensor completion and low-\(n\)-rank tensor recovery via convex optimization. Inverse Probl. 27, 025010 (2011)

  12. Golub, G.H., Van Loan, C.F.: Matrix Computations, 3rd edn. Johns Hopkins University Press, Baltimore (1996)

  13. Hale, E.T., Yin, W.T., Zhang, Y.: A fixed-point continuation method for \(l_1\)-regularized minimization with applications to compressed sensing. SIAM J. Optim. 19(3), 1107–1130 (2008)

  14. Hao, N., Kilmer, M.E., Braman, K., Hoover, R.C.: Facial recognition using tensor–tensor decomposition. SIAM J. Imaging Sci. 6(1), 437–463 (2013)

  15. Håstad, J.: Tensor rank is NP-complete. J. Algorithms 11(4), 644–654 (1990)

  16. Kilmer, M.E., Martin, C.D., Perrone, L.: A third-order generalization of the matrix SVD as a product of third-order tensors. Technical Report TR-2008-4, Department of Computer Science, Tufts University, Medford, MA (2008)

  17. Kilmer, M.E., Braman, K., Hao, N., Hoover, R.C.: Third order tensors as operators on matrices: a theoretical and computational framework with applications in imaging. SIAM J. Matrix Anal. Appl. 34(1), 148–172 (2013)

  18. Kilmer, M.E., Martin, C.D.: Factorization strategies for third-order tensors. Linear Algebra Appl. 435(3), 641–658 (2011)

  19. Kolda, T.G., Bader, B.W.: Tensor decompositions and applications. SIAM Rev. 51(3), 455–500 (2009)

  20. Komodakis, N.: Image completion using global optimization. CVPR 1, 442–452 (2006)

  21. Korah, T., Rasmussen, C.: Spatiotemporal inpainting for recovering texture maps of occluded building facades. IEEE Trans. Image Process. 16(9), 2262–2271 (2007)

  22. Kroonenberg, P.: Three-Mode Principal Component Analysis: Theory and Applications. DSWO Press, Leiden (1983)

  23. Larsen, R.M.: PROPACK: software for large and sparse SVD calculations. Available at http://sun.stanford.edu/srmunk/PROPACK/

  24. De Lathauwer, L., De Moor, B.: From matrix to tensor: multilinear algebra and signal processing. In: McWhirter, J., Proudler, I. (eds.) Mathematics in Signal Processing IV, pp. 1–15. Clarendon Press, Oxford (1998)

  25. Liu, J., Musialski, P., Wonka, P., Ye, J.P.: Tensor completion for estimating missing values in visual data. IEEE Int. Conf. Computer Vision (ICCV), Kyoto, Japan, pp. 2114–2121 (2009)

  26. Ma, S.Q., Goldfarb, D., Chen, L.F.: Fixed point and Bregman iterative methods for matrix rank minimization. Math. Program. 128(1–2), 321–353 (2011)

  27. Recht, B., Fazel, M., Parrilo, P.A.: Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52(3), 471–501 (2010)

  28. Recht, B.: A simpler approach to matrix completion. J. Mach. Learn. Res. 12, 3413–3430 (2011)

  29. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)

  30. Semerci, O., Hao, N., Kilmer, M.E., Miller, E.L.: Tensor-based formulation and nuclear norm regularization for multienergy computed tomography. IEEE Trans. Image Process. 23(4), 1678–1693 (2014)

  31. Signoretto, M., De Lathauwer, L., Suykens, J.A.K.: Nuclear norms for tensors and their use for convex multilinear estimation. Preprint, submitted to Linear Algebra Appl., September 2010

  32. Smilde, A., Bro, R., Geladi, P.: Multi-way Analysis: Applications in the Chemical Sciences. Wiley, New York (2004)

  33. Toh, K.C., Yun, S.W.: An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pac. J. Optim. 6, 615–640 (2010)

  34. Tomioka, R., Hayashi, K., Kashima, H.: Estimation of low-rank tensors via convex optimization (2011). arXiv:1010.0789v2

  35. Tseng, P.: Convergence of block coordinate descent method for nondifferentiable minimization. J. Optim. Theory Appl. 109(3), 475–494 (2001)

  36. Watson, G.A.: Characterization of the subdifferential of some matrix norms. Linear Algebra Appl. 170(1), 33–45 (1992)

  37. Zhang, Z.M., Ely, G., Aeron, S., Hao, N., Kilmer, M.E.: Novel methods for multilinear data completion and de-noising based on tensor-SVD. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2014)

Acknowledgments

The authors are very grateful to the editor and the two anonymous referees for their valuable suggestions and comments, which have considerably improved the presentation of this paper. We would like to thank Silvia Gandy for sending us the codes of ADM-TR(E) and DR-TR. This work was partially supported by the National Natural Science Foundation of China (Nos. 11171252, 11431002 and 11201332).

Author information

Corresponding author

Correspondence to Zheng-Hai Huang.

About this article

Cite this article

Yang, L., Huang, ZH., Hu, S. et al. An iterative algorithm for third-order tensor multi-rank minimization. Comput Optim Appl 63, 169–202 (2016). https://doi.org/10.1007/s10589-015-9769-x
