Regularized and Structured Tensor Total Least Squares Methods with Applications

Published in: Journal of Optimization Theory and Applications

Abstract

Total least squares (TLS), also known as errors-in-variables modeling in statistical analysis, is an effective method for solving linear equations when noise is present not only in the observation data but also in the mapping operator. In addition, Tikhonov regularization is widely used for ill-posed problems, and the structure of the mapping operator plays a crucial role in solving the TLS problem. Tensor operators have advantages in characterizing such models, which motivates the development of a corresponding theory for the tensor TLS problem. This paper proposes tensor regularized TLS and structured tensor TLS methods, based on the tensor-tensor product (t-product), for solving ill-conditioned and structured tensor equations, respectively. Properties of the solutions of these approaches are proved, and algorithms for computing them are presented. Building on these methods, applications to image and video deblurring are explored. Numerical examples illustrate the effectiveness of our methods in comparison with some existing methods.
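For readers unfamiliar with the classical matrix TLS problem that the paper generalizes, the following is a minimal sketch of the standard SVD-based solution for a single right-hand side (the function name `tls` and the synthetic data are ours, not from the paper; it assumes the smallest singular value of the augmented matrix is simple and the solution exists):

```python
import numpy as np

def tls(A, b):
    """Total least squares solution of A x ~ b via the SVD of the
    augmented matrix [A | b]: x is read off the right singular vector
    associated with the smallest singular value."""
    m, n = A.shape
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                      # right singular vector for the smallest sigma
    if np.isclose(v[n], 0.0):
        raise ValueError("TLS solution does not exist (last component ~ 0)")
    return -v[:n] / v[n]

# Illustrative use: with noiseless data the TLS solution recovers x exactly.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
x = tls(A, b)
```

When both A and b carry noise, this estimator minimizes the Frobenius norm of the joint perturbation [dA | db] making the corrected system consistent, which is the setting the paper extends to tensor equations.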
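The tensor-tensor product (t-product) underlying the proposed methods can be evaluated slice-wise in the Fourier domain: transform along the third (tube) mode, multiply the frontal slices, and transform back. A minimal illustrative sketch for real third-order tensors (the function name `t_product` is ours, not from the paper):

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors A (n1 x n2 x n3) and B (n2 x l x n3):
    FFT along the tube mode, facewise matrix products, inverse FFT."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    # Multiply corresponding frontal slices in the Fourier domain.
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)
    return np.real(np.fft.ifft(Ch, axis=2))

# Sanity check: the t-product identity tensor has the identity matrix as its
# first frontal slice and zeros elsewhere, so A * I = A.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3, 5))
I = np.zeros((3, 3, 5))
I[:, :, 0] = np.eye(3)
C = t_product(A, I)
```

Equivalently, the t-product is matrix multiplication by the block-circulant matrix built from the frontal slices; the FFT route above is the standard O(n1 n2 l n3 + n1 n2 n3 log n3) way to compute it.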



Data Availability

All datasets are publicly available.


Acknowledgements

We would like to thank the handling editor and two referees for their detailed comments. We also thank Professors Ren-cang Li, Tie-xiang Li, Wen-wei Lin, Qiang Ye and Lei-hong Zhang for their useful suggestions. F. Han is supported by the Science and Technology Commission of Shanghai Municipality under Grant 23JC1400501 and the Joint Research Project between China and Serbia under Grant 2024-6-7. Y. Wei is supported by the National Natural Science Foundation of China under Grant 12271108 and the Ministry of Science and Technology of China under Grant G2023132005L. P. Xie is supported by the National Natural Science Foundation of China under Grant 12271108. Part of this work was completed while he visited Fudan University during 2023–2024.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Yimin Wei.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Communicated by Charles Dossal.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Han, F., Wei, Y. & Xie, P. Regularized and Structured Tensor Total Least Squares Methods with Applications. J Optim Theory Appl 202, 1101–1136 (2024). https://doi.org/10.1007/s10957-024-02507-1

