
A dynamic hypergraph regularized non-negative tucker decomposition framework for multiway data analysis

  • Original Article
  • International Journal of Machine Learning and Cybernetics

Abstract

Non-negative tensor decomposition has achieved significant success in machine learning due to its superiority in extracting non-negative, parts-based features and physically meaningful latent components from high-order data. To improve its representation ability, hypergraph regularization has been incorporated into tensor decomposition models to capture the nonlinear manifold structure of data. However, previous hypergraph regularized tensor decomposition methods construct the hypergraph in the original data space. This may yield an inaccurate manifold structure and degrade representation performance when the original data are corrupted by noise. To solve these problems, in this paper we propose a dynamic hypergraph regularized non-negative Tucker decomposition (DHNTD) method for multiway data analysis. Specifically, to take full advantage of both the multilinear structure and the nonlinear manifold of tensor data, we learn the dynamic hypergraph and the non-negative low-dimensional representation in a unified framework. Moreover, we develop a multiplicative update (MU) algorithm to solve the resulting optimization problem and theoretically prove its convergence. Experimental results on clustering tasks using six image datasets demonstrate the superiority of the proposed method over state-of-the-art methods.
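The appendix below works with a hypergraph Laplacian of the form \({\mathbf {L}}_{hyper} = {\mathbf {I}} - \varvec{\Theta }\). For orientation, here is a minimal NumPy sketch of the standard normalized hypergraph Laplacian of Zhou et al. [34], which has exactly that form; the function and variable names are ours, and the toy incidence matrix is illustrative only.

```python
import numpy as np

def hypergraph_laplacian(H, w):
    """Normalized hypergraph Laplacian L = I - Theta (Zhou et al. [34]).

    H : (n_vertices, n_edges) binary incidence matrix.
    w : (n_edges,) non-negative hyperedge weights.
    """
    W = np.diag(w)
    dv = H @ w                          # vertex degrees
    de = H.sum(axis=0)                  # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    Theta = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt
    return np.eye(H.shape[0]) - Theta, Theta

# Toy example: 5 vertices, 3 hyperedges, unit weights.
H = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)
L_hyper, Theta = hypergraph_laplacian(H, np.ones(3))
```

The `Theta` returned here is the same quantity that appears in the appendix's inequality (61).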


Notes

  1. Available at http://www.cad.zju.edu.cn/home/dengcai/Data/MLData.html.

  2. Available at https://scikit-learn.org/stable/.

  3. Available at http://www.cad.zju.edu.cn/home/dengcai/Data/FaceData.html.

  4. Available at https://github.com/ZJULearning/MatlabFunc/tree/master/MatrixFactorization.

  5. Available at https://github.com/huangsd/NMFAN.

References

  1. Kolda TG, Bader BW (2009) Tensor decompositions and applications. SIAM Rev 51(3):455–500


  2. Qiu Y, Sun W, Zhang Y, Gu X, Zhou G (2021) Approximately orthogonal nonnegative tucker decomposition for flexible multiway clustering. Sci China Technol Sci 64(9):1872–1880


  3. Huang Z, Qiu Y, Zhao Q, Zhou G (2021) Bayesian robust tucker decomposition for multiway data analysis. In: 2021 China automation congress (CAC), IEEE, pp 5559–5564

  4. Huang H, Ma Z, Zhang G (2022) Dimensionality reduction of tensors based on manifold-regularized tucker decomposition and its iterative solution. Int J Mach Learn Cybern 13(2):509–522


  5. Wang AD, Jin Z, Yang JY (2020) A faster tensor robust PCA via tensor factorization. Int J Mach Learn Cybern 11(12):2771–2791


  6. Oseledets IV (2011) Tensor-train decomposition. SIAM J Sci Comput 33(5):2295–2317


  7. Qiu Y, Zhou G, Huang Z, Zhao Q, Xie S (2022) Efficient tensor robust PCA under hybrid model of tucker and tensor train. IEEE Signal Process Lett 29:627–631


  8. Zhao Q, Zhou G, Xie S, Zhang L, Cichocki A (2016) Tensor ring decomposition. arXiv preprint arXiv:1606.05535

  9. Yu Y, Xie K, Yu J, Jiang Q, Xie S (2021) Fast nonnegative tensor ring decomposition based on the modulus method and low-rank approximation. Sci China Technol Sci 64(9):1843–1853


  10. Cichocki A, Mandic D, De Lathauwer L, Zhou G, Zhao Q, Caiafa C, Phan HA (2015) Tensor decompositions for signal processing applications: from two-way to multiway component analysis. IEEE Signal Process Mag 32(2):145–163


  11. Chen X, Zhou G, Wang Y, Hou M, Zhao Q, Xie S (2020) Accommodating multiple tasks’ disparities with distributed knowledge-sharing mechanism. IEEE Trans Cybern

  12. Qiu Y, Zhou G, Chen X, Zhang D, Zhao X, Zhao Q (2021) Semi-supervised non-negative tucker decomposition for tensor data representation. Sci China Technol Sci 64(9):1881–1892


  13. Huang Z, Qiu Y, Sun W (2021) Recognition of motor imagery EEG patterns based on common feature analysis. Brain Comput Interfaces 8(4):128–136


  14. Kim YD, Choi S (2007) Nonnegative tucker decomposition. In: 2007 IEEE conference on computer vision and pattern recognition, IEEE, pp 1–8

  15. Hao N, Horesh L, Kilmer ME (2014) Nonnegative tensor decomposition. In: Carmi AY, Mihaylova L, Godsill SJ (eds) Compressed sensing & sparse filtering. Springer, Berlin, Heidelberg, pp 123–148. https://doi.org/10.1007/978-3-642-38398-4_5


  16. Lee N, Phan AH, Cong F, Cichocki A (2016) Nonnegative tensor train decompositions for multi-domain feature extraction and clustering. In: Hirose A, Ozawa S, Doya K, Ikeda K, Lee M, Liu D (eds) Neural information processing. Springer, Cham, pp 87–95. https://doi.org/10.1007/978-3-319-46675-0_10


  17. Yu Y, Zhou G, Zheng N, Qiu Y, Xie S, Zhao Q (2022) Graph-regularized non-negative tensor-ring decomposition for multiway representation learning. IEEE Trans Cybern

  18. Zhou G, Cichocki A, Zhao Q, Xie S (2015) Efficient nonnegative tucker decompositions: algorithms and uniqueness. IEEE Trans Image Process 24(12):4990–5003


  19. Zhou G, Cichocki A, Zhao Q, Xie S (2014) Nonnegative matrix and tensor factorizations: an algorithmic perspective. IEEE Signal Process Mag 31(3):54–65


  20. Zhou G, Cichocki A, Xie S (2012) Fast nonnegative matrix/tensor factorization based on low-rank approximation. IEEE Trans Signal Process 60(6):2928–2940


  21. Cichocki A, Zdunek R, Phan AH, Amari S-I (2009) Nonnegative matrix and tensor factorizations: applications to exploratory multi-way data analysis and blind source separation. Wiley, Hoboken


  22. Yokota T, Zdunek R, Cichocki A, Yamashita Y (2015) Smooth nonnegative matrix and tensor factorizations for robust multi-way data analysis. Signal Process 113:234–249


  23. Xu Y (2015) Alternating proximal gradient method for sparse nonnegative tucker decomposition. Math Program Comput 7(1):39–70


  24. Wu Q, Zhang L, Cichocki A (2014) Multifactor sparse feature extraction using convolutive nonnegative tucker decomposition. Neurocomputing 129:17–24


  25. Cai D, He X, Han J, Huang TS (2010) Graph regularized nonnegative matrix factorization for data representation. IEEE Trans Pattern Anal Mach Intell 33(8):1548–1560


  26. Wang JJY, Bensmail H, Gao X (2013) Multiple graph regularized nonnegative matrix factorization. Pattern Recognit 46(10):2840–2847


  27. Sun J, Cai X, Sun F, Hong R (2017) Dual graph-regularized constrained nonnegative matrix factorization for image clustering. KSII Trans Internet Inf Syst (TIIS) 11(5):2607–2627


  28. Li X, Cui G, Dong Y (2016) Graph regularized non-negative low-rank matrix factorization for image clustering. IEEE Trans Cybern 47(11):3840–3853


  29. Li X, Ng MK, Cong G, Ye Y, Wu Q (2016) MR-NTD: manifold regularization nonnegative tucker decomposition for tensor data dimension reduction and representation. IEEE Trans Neural Netw Learn Syst 28(8):1787–1800


  30. Qiu Y, Zhou G, Zhang Y, Xie S (2019) Graph regularized nonnegative tucker decomposition for tensor data representation. In: ICASSP 2019-2019 IEEE international conference on acoustics, speech and signal processing (ICASSP), IEEE, pp 8613–8617

  31. Sofuoglu SE, Aviyente S (2020) Graph regularized tensor train decomposition. In: ICASSP 2020-2020 IEEE international conference on acoustics, speech and signal processing (ICASSP), IEEE, pp 3912–3916

  32. Gao Y, Zhang Z, Lin H, Zhao X, Du S, Zou C (2020) Hypergraph learning: methods and practices. IEEE Trans Pattern Anal Mach Intell

  33. Huang S, Elgammal A, Yang D (2017) On the effect of hyperedge weights on hypergraph learning. Image Vis Comput 57:89–101


  34. Zhou D, Huang J, Schölkopf B (2006) Learning with hypergraphs: Clustering, classification, and embedding. Adv Neural Inf Process Syst 19

  35. Zeng K, Yu J, Li C, You J, Jin T (2014) Image clustering by hyper-graph regularized non-negative matrix factorization. Neurocomputing 138:209–217


  36. Qian B, Shen X, Shu Z, Gu X, Huang J, Hu J (2018) Hyper-graph regularized multi-view matrix factorization for vehicle identification. In: International conference on cloud computing and security, Springer, pp 543–554

  37. Yu N, Gao YL, Liu JX, Wang J, Shang J (2019) Robust hypergraph regularized non-negative matrix factorization for sample clustering and feature selection in multi-view gene expression data. Hum Genom 13(1):1–10


  38. Huang S, Wang H, Ge Y, Huangfu L, Zhang X, Yang D (2018) Improved hypergraph regularized nonnegative matrix factorization with sparse representation. Pattern Recognit Lett 102:8–14


  39. Yin W, Ma Z, Liu Q (2021) Hyperntf: A hypergraph regularized nonnegative tensor factorization for dimensionality reduction. arXiv preprint arXiv:2101.06827

  40. Kang Z, Lin Z, Zhu X, Xu W (2021) Structured graph learning for scalable subspace clustering: from single view to multiview. IEEE Trans Cybern

  41. Lin Z, Kang Z, Zhang L, Tian L (2021) Multi-view attributed graph clustering. IEEE Trans Knowl Data Eng

  42. Huang Y, Liu Q, Metaxas D (2009) Video object segmentation by hypergraph cut. In: 2009 IEEE conference on computer vision and pattern recognition, IEEE, pp 1738–1745

  43. Gao Y, Wang M, Tao D, Ji R, Dai Q (2012) 3-D object retrieval and recognition with hypergraph analysis. IEEE Trans Image Process 21(9):4290–4303


  44. Pei X, Chen C, Gong W (2016) Concept factorization with adaptive neighbors for document clustering. IEEE Trans Neural Netw Learn Syst 29(2):343–352


  45. Huang S, Xu Z, Kang Z, Ren Y (2020) Regularized nonnegative matrix factorization with adaptive local structure learning. Neurocomputing 382:196–209


  46. Yu J, Tao D, Wang M (2012) Adaptive hypergraph learning and its application in image classification. IEEE Trans Image Process 21(7):3262–3272


  47. Zhang Z, Lin H, Gao Y (2018) Dynamic hypergraph structure learning. In: IJCAI, pp 3162–3169

  48. Hackbusch W (2012) Tensor spaces and numerical tensor calculus, vol 42. Springer, Berlin


  49. Lee DD, Seung HS (1999) Learning the parts of objects by non-negative matrix factorization. Nature 401(6755):788–791


  50. Wang M, Liu X, Wu X (2015) Visual classification by \(l_1\)-hypergraph modeling. IEEE Trans Knowl Data Eng 27(9):2564–2574


  51. Jin T, Yu Z, Gao Y, Gao S, Sun X, Li C (2019) Robust \(l_2\)-hypergraph and its applications. Inf Sci 501:708–723


  52. Wang Y, Yin W, Zeng J (2019) Global convergence of ADMM in nonconvex nonsmooth optimization. J Sci Comput 78(1):29–63


  53. Guan N, Tao D, Luo Z, Shawe-Taylor J (2012) MahNMF: Manhattan non-negative matrix factorization. Statistics 1050:14


  54. Bonnans JF, Gilbert JC, Lemaréchal C, Sagastizábal CA (2006) Numerical optimization: theoretical and practical aspects. Springer Science & Business Media, Berlin


  55. Xu W, Liu X, Gong Y (2003) Document clustering based on non-negative matrix factorization. In: Proceedings of the 26th annual international ACM SIGIR conference on research and development in information retrieval, pp 267–273

  56. MacQueen J et al (1967) Some methods for classification and analysis of multivariate observations. In: Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, Oakland, CA, USA, vol 1, pp 281–297

  57. Qiu Y, Zhou G, Wang Y, Zhang Y, Xie S (2020) A generalized graph regularized non-negative tucker decomposition framework for tensor data representation. IEEE Trans Cybern

  58. Van der Maaten L, Hinton G (2008) Visualizing data using t-SNE. J Mach Learn Res 9(11)


Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grants 62073087 and 62071132, and by the Key-Area Research and Development Program of Guangdong Province under Grant 2019B010154002. The data that support the findings of this study are available online.

Author information


Corresponding author

Correspondence to Guoxu Zhou.


Appendix

To prove the convergence of the DHNTD-MU algorithm when updating \(\{{\mathbf {A}}^{(n)}\}_{n=1}^{N}\) and \(\varvec{{\mathcal {C}}}\), we follow a procedure similar to that in [25]. We first consider the update of \({\mathbf {A}}^{(N)}\). The objective function \(f_a({\mathbf {A}}^{(N)})\) of DHNTD can be rewritten as follows:

$$\begin{aligned} \begin{aligned} f_a({\mathbf {A}}^{(N)})&=\frac{1}{2}\left\| {\mathbf {X}}_{(N)} - {\mathbf {A}}^{(N)}{{\mathbf {B}}^{(N)}}^{\top } \right\| _F^2 +\frac{\lambda }{2}{\text {Tr}}\left( {{\mathbf {A}}^{(N)}}^{\top } {\mathbf {L}}_{hyper}{\mathbf {A}}^{(N)}\right) \\&=\frac{1}{2}\sum _{i=1}^{I_N}\sum _{j=1}^{I_1\cdots I_{N-1}}{\left( {\left[ {\mathbf {X}}_{(N)}\right] }_{ij}-\sum _{k=1}^{R_N} {\left[ {\mathbf {A}}^{(N)}\right] }_{ik}{\left[ {\mathbf {B}}^{(N)}\right] }_{jk}\right) }^2\\&\quad +\frac{\lambda }{2}\sum _{k=1}^{R_N}\sum _{i=1}^{I_N}\sum _{l=1}^{I_N} {\left[ {\mathbf {A}}^{(N)}\right] }_{ik} {\left[ {\mathbf {L}}_{hyper}\right] }_{il}{\left[ {\mathbf {A}}^{(N)}\right] }_{lk}. \end{aligned} \end{aligned}$$
(55)

For any element \(a^{(N)}_{ij}\) (i.e., \(a^{(N)}_{ij} = {[{\mathbf {A}}^{(N)}]}_{ij}\)) of \({\mathbf {A}}^{(N)}\), let \(F_{ij}\) denote the part of \(f_a({\mathbf {A}}^{(N)})\) that depends only on \(a^{(N)}_{ij}\). We have

$$\begin{aligned} \begin{aligned}&F_{ij}^{\prime } = {\left[ \frac{\partial f_a({\mathbf {A}}^{(N)})}{\partial {\mathbf {A}}^{(N)}} \right] }_{ij}\\&\quad ={\left[ {\mathbf {A}}^{(N)}{{\mathbf {B}}^{(N)}}^{\top }{\mathbf {B}}^{(N)} -{\mathbf {X}}_{(N)}{\mathbf {B}}^{(N)}+\lambda {\mathbf {L}}_{hyper}{\mathbf {A}}^{(N)}\right] }_{ij}, \end{aligned} \end{aligned}$$
(56)
$$\begin{aligned} \begin{aligned}&F_{ij}^{\prime \prime } = {\left[ {{\mathbf {B}}^{(N)}}^{\top }{\mathbf {B}}^{(N)}\right] }_{jj} +\lambda \left[ {\mathbf {L}}_{hyper}\right] _{ii}. \end{aligned} \end{aligned}$$
(57)
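As a concrete reading of Eqs. (55)–(57), the following NumPy sketch (all names are ours) evaluates the objective, its gradient, and the per-entry second derivative for a factor matrix \({\mathbf {A}}\), unfolding \({\mathbf {X}}_{(N)}\), factor \({\mathbf {B}}^{(N)}\), and Laplacian \({\mathbf {L}}_{hyper}\):

```python
import numpy as np

def f_a(A, X_N, B, L_hyper, lam):
    # Objective (55): 0.5*||X_(N) - A B^T||_F^2 + 0.5*lam*Tr(A^T L_hyper A)
    resid = X_N - A @ B.T
    return 0.5 * np.sum(resid ** 2) + 0.5 * lam * np.trace(A.T @ L_hyper @ A)

def grad_f_a(A, X_N, B, L_hyper, lam):
    # Gradient (56): A B^T B - X_(N) B + lam * L_hyper A
    return A @ (B.T @ B) - X_N @ B + lam * (L_hyper @ A)

def second_deriv(i, j, B, L_hyper, lam):
    # Second derivative (57) of f_a w.r.t. the single entry [A]_{ij}
    return (B.T @ B)[j, j] + lam * L_hyper[i, i]
```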

Since our update rule for \({\mathbf {A}}^{(N)}\) is element-wise, it suffices to prove that each \(F_{ij}\) is nonincreasing under the updating step of Eq. (24).
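Eq. (24) itself is not reproduced on this page. For orientation, the following is a hedged sketch of what the element-wise multiplicative update looks like when the gradient (56) is split with \({\mathbf {L}}_{hyper} = {\mathbf {I}} - \varvec{\Theta }\) (the split used in Eq. (61) below), so that numerator and denominator are both non-negative; this is our reconstruction, not a verbatim copy of Eq. (24):

```python
def update_A(A, X_N, B, Theta, lam, eps=1e-10):
    # Split gradient (56) using L_hyper = I - Theta:
    #   non-negative "minus" part: X_(N) B + lam * Theta A
    #   non-negative "plus"  part: A B^T B + lam * A   (note lam * I A = lam * A)
    # Multiplicative update: A <- A * minus_part / plus_part (element-wise).
    numer = X_N @ B + lam * (Theta @ A)
    denom = A @ (B.T @ B) + lam * A + eps  # eps guards against division by zero
    return A * numer / denom
```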

Proof of Lemma 1:

$$\begin{aligned} F\left( v^{t+1}\right) \le T\left( v^{t+1}, v^{t}\right) \le T\left( v^{t}, v^{t}\right) =F\left( v^{t}\right) . \end{aligned}$$

\(\square\)

Proof of Lemma 2:

It is obvious that \(T\left( a,a\right) = F_{ij}\left( a\right)\). Therefore, we just need to prove \(T\left( a,{a_{ij}^{(N)}}^{t}\right) \ge F_{ij}\left( a\right)\). We first perform a Taylor series expansion of \(F_{ij}\left( a\right)\):

$$\begin{aligned} \begin{aligned}&F_{ij}(a)= F_{ij}\left( {a_{ij}^{(N)}}^{t}\right) +F_{ij}^{\prime }\left( {a_{ij}^{(N)}}^{t}\right) \left( a-{a_{ij}^{(N)}}^{t}\right) \\&\quad +\frac{1}{2}\left( {\left[ {{\mathbf {B}}^{(N)}}^{\top }{\mathbf {B}}^{(N)}\right] }_{jj}+\lambda \left[ {\mathbf {L}}_{hyper}\right] _{ii}\right) \left( a-{a_{ij}^{(N)}}^{t}\right) ^{2}. \end{aligned} \end{aligned}$$
(58)

Comparing Eq. (58) with Eq. (34), we find that \(T\left( a,{a_{ij}^{(N)}}^{t}\right) \ge F_{ij}\left( a\right)\) holds provided that

$$\begin{aligned} \begin{aligned}&\frac{{\left[ {\mathbf {A}}^{(N)}{{\mathbf {B}}^{(N)}}^{\top }{\mathbf {B}}^{(N)}\right] }_{ij}+\lambda {\left[ {\mathbf {I}} {\mathbf {A}}^{(N)}\right] }_{ij}}{{a_{ij}^{(N)}}^{t}} \\&\quad \ge {\left[ {{\mathbf {B}}^{(N)}}^{\top }{\mathbf {B}}^{(N)}\right] }_{jj}+\lambda \left[ {\mathbf {L}}_{hyper}\right] _{ii}. \end{aligned} \end{aligned}$$
(59)

We have

$$\begin{aligned} \begin{aligned}&{\left[ {\mathbf {A}}^{(N)}{{\mathbf {B}}^{(N)}}^{\top }{\mathbf {B}}^{(N)}\right] }_{ij} \\&\quad =\sum _{l=1}^{R_N}{a_{il}^{(N)}}^{t}\left[ {{\mathbf {B}}^{(N)}}^{\top }{\mathbf {B}}^{(N)}\right] _{lj}\ge {a_{ij}^{(N)}}^{t}\left[ {{\mathbf {B}}^{(N)}}^{\top }{\mathbf {B}}^{(N)}\right] _{jj}, \end{aligned} \end{aligned}$$
(60)

and

$$\begin{aligned} \begin{aligned} \lambda {\left[ {\mathbf {I}} {\mathbf {A}}^{(N)}\right] }_{ij}&=\lambda \sum _{m=1}^{I_N}I_{im}{a_{mj}^{(N)}}^{t} \ge \lambda I_{ii}{a_{ij}^{(N)}}^{t}\\&\ge \lambda \left[ {\mathbf {I}}-\varvec{\Theta }\right] _{ii}{a_{ij}^{(N)}}^{t} = \lambda \left[ {\mathbf {L}}_{hyper}\right] _{ii}{a_{ij}^{(N)}}^{t}. \end{aligned} \end{aligned}$$
(61)

Thus, \(T\left( a,{a_{ij}^{(N)}}^{t}\right) \ge F_{ij}\left( a\right)\) holds, and \(T\left( a,{a_{ij}^{(N)}}^{t}\right)\) is an auxiliary function of \(F_{ij}\). By Lemma 1, \(F_{ij}\) is nonincreasing under the update rule of Eq. (24), and hence so is the objective function \(f_a({\mathbf {A}}^{(N)})\). \(\square\)
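The key inequalities (60) and (61) are easy to spot-check numerically. The snippet below (our own construction with random non-negative data) verifies (60) entrywise; (61) follows similarly, combining the same term-dropping argument with \(\left[ {\mathbf {L}}_{hyper}\right] _{ii} \le \left[ {\mathbf {I}}\right] _{ii}\), which holds because \(\varvec{\Theta }\) has non-negative diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((6, 4))    # plays the role of A^(N) at iteration t
B = rng.random((20, 4))   # plays the role of B^(N)
BtB = B.T @ B

# Inequality (60): [A B^T B]_{ij} >= a_{ij} [B^T B]_{jj} entrywise, because
# the dropped terms a_{il} [B^T B]_{lj} (l != j) are all non-negative.
assert np.all(A @ BtB >= A * np.diag(BtB) - 1e-12)
```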

Setting \(\lambda = 0\), the above proof also holds for updating \({\mathbf {A}}^{(n)}\ \left( n\ne N\right)\). This shows that the update rules of \(\{{\mathbf {A}}^{(n)}\}_{n=1}^{N}\) leave each objective function \(f_a({\mathbf {A}}^{(n)}), n=1,2,\ldots ,N\), nonincreasing.

Second, we consider the update rule of \(\varvec{{\mathcal {C}}}\). The objective function \(f_c({\text {vec}}(\varvec{{\mathcal {C}}}))\) of DHNTD can be rewritten as follows:

$$\begin{aligned} \begin{aligned} f_c({\text {vec}}(\varvec{{\mathcal {C}}}))=&\frac{1}{2}\left\| {\text {vec}} (\varvec{{\mathcal {X}}})-{\mathbf {Q}}{\text {vec}}(\varvec{{\mathcal {C}}}) \right\| _F^2\\ =&\frac{1}{2}\sum _{q=1}^{I_1\cdots I_N} {\left( \left[ {\text {vec}}(\varvec{{\mathcal {X}}})\right] _q-\sum _{l=1}^{R_1 \cdots R_N}\left[ {\mathbf {Q}}\right] _{ql}\left[ {\text {vec}}\left( \varvec{{\mathcal {C}}}\right) \right] _{l}\right) }^{2}. \end{aligned} \end{aligned}$$
(62)
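Here \({\mathbf {Q}}\) is the Kronecker-structured matrix that maps the vectorized core to the vectorized Tucker reconstruction. A minimal sketch, assuming the column-major vec convention so that \({\text {vec}}(\varvec{{\mathcal {X}}}) \approx ({\mathbf {A}}^{(N)} \otimes \cdots \otimes {\mathbf {A}}^{(1)}){\text {vec}}(\varvec{{\mathcal {C}}})\); the exact factor ordering depends on the paper's vec convention, which is not visible on this page:

```python
import numpy as np
from functools import reduce

def build_Q(factors):
    # factors = [A1, ..., AN]; column-major vec pairs with the reversed
    # Kronecker product: Q = A^(N) kron ... kron A^(1)
    return reduce(np.kron, factors[::-1])

def f_c(vec_C, vec_X, Q):
    # Objective (62): 0.5 * ||vec(X) - Q vec(C)||^2
    r = vec_X - Q @ vec_C
    return 0.5 * (r @ r)
```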

For any element \(c_l\) (i.e., \(c_l=\left[ {\text {vec}}\left( \varvec{{\mathcal {C}}}\right) \right] _{l}\)) of \({\text {vec}}\left( \varvec{{\mathcal {C}}}\right)\), let \(C_l\) denote the part of \(f_c({\text {vec}}(\varvec{{\mathcal {C}}}))\) that depends only on \(c_l\). It is easy to check that

$$\begin{aligned} \begin{aligned} C_l^{\prime } = \left[ {\mathbf {Q}}^{\top }{\mathbf {Q}}{\text {vec}}(\varvec{{\mathcal {C}}}) -{\mathbf {Q}}^{\top }{\text {vec}}(\varvec{{\mathcal {X}}})\right] _{l}, \end{aligned} \end{aligned}$$
(63)
$$\begin{aligned} \begin{aligned} C_l^{\prime \prime } = \left[ {\mathbf {Q}}^{\top }{\mathbf {Q}}\right] _{ll}. \end{aligned} \end{aligned}$$
(64)

Since our update rule for \(\varvec{{\mathcal {C}}}\) is element-wise, it suffices to show that each \(C_l\) is nonincreasing under the updating step of Eq. (27); a hedged sketch of such an update follows.
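Eq. (27) is likewise not shown on this page; given the gradient (63), the standard multiplicative rule it implies would look like the sketch below (our reconstruction, with an `eps` guard that the paper may not use):

```python
def update_core(vec_C, vec_X, Q, eps=1e-10):
    # Split gradient (63): "plus" part Q^T Q vec(C), "minus" part Q^T vec(X).
    # Multiplicative update: vec(C) <- vec(C) * (Q^T vec(X)) / (Q^T Q vec(C))
    numer = Q.T @ vec_X
    denom = Q.T @ (Q @ vec_C) + eps
    return vec_C * numer / denom
```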

Proof of Lemma 3:

It is obvious that \(T(c,c) = C_l(c)\), so we only need to prove that \(T(c,c_l^t) \ge C_l(c)\). Again, we perform a Taylor expansion of \(C_l(c)\):

$$\begin{aligned} \begin{aligned} C_l(c) = C_l\left( c_l^{t}\right) + C_l^{\prime }\left( c_l^{t}\right) (c-c_l^{t}) + \frac{1}{2}\left[ {\mathbf {Q}}^{\top }{\mathbf {Q}}\right] _{ll}(c-c_l^{t})^2. \end{aligned} \end{aligned}$$
(65)

Comparing Eq. (65) with Eq. (35), we find that \(T(c,c_l^t) \ge C_l(c)\) holds provided that

$$\begin{aligned} \begin{aligned} \frac{\left[ {\mathbf {Q}}^{\top }{\mathbf {Q}}{\text {vec}}(\varvec{{\mathcal {C}}})\right] _l}{c_l^{t}} \ge \left[ {\mathbf {Q}}^{\top }{\mathbf {Q}}\right] _{ll}. \end{aligned} \end{aligned}$$
(66)

We have

$$\begin{aligned} \begin{aligned} \left[ {\mathbf {Q}}^{\top }{\mathbf {Q}}{\text {vec}}(\varvec{{\mathcal {C}}})\right] _l = \sum _{r=1}^{R_1\cdots R_N} \left[ {\mathbf {Q}}^{\top }{\mathbf {Q}}\right] _{lr}c_r^t \ge \left[ {\mathbf {Q}}^{\top }{\mathbf {Q}}\right] _{ll}c_l^t. \end{aligned} \end{aligned}$$
(67)

Therefore, \(T(c,c_l^t) \ge C_l(c)\) holds and \(T(c,c_l^t)\) is an auxiliary function of \(C_l\). By Lemma 1, \(C_l\) is nonincreasing under the update rule of Eq. (27), and hence so is the objective function \(f_c({\text {vec}}(\varvec{{\mathcal {C}}}))\). \(\square\)


About this article


Cite this article

Huang, Z., Zhou, G., Qiu, Y. et al. A dynamic hypergraph regularized non-negative tucker decomposition framework for multiway data analysis. Int. J. Mach. Learn. & Cyber. 13, 3691–3710 (2022). https://doi.org/10.1007/s13042-022-01620-9

