Abstract
To improve action recognition accuracy, a nonnegative matrix factorization with a local constraint (LC-NMF) is first presented. Applied to trajectory clustering, it removes complex backgrounds and yields the motion-salient regions. Second, a nonnegative matrix factorization with a temporal-dependencies constraint (TD-NMF) is proposed, which fully mines the spatiotemporal relationships in a video, not only between adjacent frames but also between multi-interval frames. Meanwhile, the introduction of the \( l_{2,1} \)-norm gives the spatiotemporal features better sparseness and robustness. In addition, these features are learned directly from the data and thus have an inherent generalization ability. Finally, a Deep NMF method is established that takes the proposed TD-NMF as the unit algorithm of each layer. Through a hierarchical feature-extraction strategy, the base matrix of the first layer is gradually decomposed and then supplemented and completed layer by layer. Consequently, more complete and accurate local feature estimates are obtained, the discriminative and expressive power of the features is enhanced, and recognition performance is further improved. Extensive experiments verify the effectiveness of the proposed methods. Moreover, the update rules and convergence proofs for LC-NMF and TD-NMF are given.
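The layered scheme sketched in the abstract (factorize the data, then further decompose the basis matrix layer by layer) can be illustrated with plain Frobenius NMF as the per-layer unit. This is only a minimal numpy stand-in: the paper's unit algorithm is TD-NMF with its temporal-dependencies constraint, and the ranks, iteration counts, and function names below are illustrative assumptions.

```python
import numpy as np

def nmf(X, k, iters=200, seed=0, eps=1e-9):
    """Plain Frobenius NMF via Lee-Seung multiplicative updates.
    Stand-in for the paper's per-layer unit algorithm (TD-NMF)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    F = rng.random((m, k)) + eps   # basis matrix
    H = rng.random((k, n)) + eps   # coefficient matrix
    for _ in range(iters):
        H *= (F.T @ X) / (F.T @ F @ H + eps)
        F *= (X @ H.T) / (F @ H @ H.T + eps)
    return F, H

def deep_nmf(X, ranks):
    """Hierarchical scheme in the spirit of Deep NMF:
    X ~ F1 H1, then the basis F1 ~ F2 H2, and so on."""
    bases, codes, M = [], [], X
    for k in ranks:
        F, H = nmf(M, k)
        bases.append(F)
        codes.append(H)
        M = F  # the next layer decomposes the current basis matrix
    return bases, codes

X = np.random.default_rng(1).random((30, 40))  # toy nonnegative data
bases, codes = deep_nmf(X, ranks=[10, 5])
# Reconstruction through both layers: F2 @ H2 @ H1
recon = bases[-1] @ codes[-1] @ codes[0]
print(recon.shape)  # (30, 40)
```

The two-layer reconstruction shows how the first-layer basis is "supplemented and completed layer by layer": the final basis is narrower, while the stacked coefficient matrices recover the original dimensions.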
Acknowledgements
This work was partially supported by the National Natural Science Foundation of China (Grant No. 61072110), the Shaanxi Province Key Project of Research and Development Plan (S2018-YF-ZDGY-0187), and the International Cooperation Project of Shaanxi Province (S2018-YF-GHMS-0061).
Ethics declarations
Conflict of interest
All the authors of the manuscript declared that there are no potential conflicts of interest.
Human and animal rights
All the authors of the manuscript declared that there is no research involving human participants and/or animals.
Informed consent
All the authors of the manuscript declared that there is no material that required informed consent.
Appendices
Appendix 1: Proof of Theorem 1
To prove Theorem 1, it suffices to show that the objective function in Eq. (6) is non-increasing under the update rules in Eqs. (11) and (18). The objective function is first proved to be non-increasing under the update rule in Eq. (11) and then under the update rule in Eq. (18). The proof uses an auxiliary function similar to that employed in the EM algorithm.
Definition 1
If the conditions \( G\left( {M,M^{\left( t \right)} } \right) \ge J\left( M \right) \) and \( G\left( {M,M} \right) = J\left( M \right) \) are satisfied, then \( G\left( {M,M^{\left( t \right)} } \right) \) is an auxiliary function of \( J\left( M \right) \).
Lemma 1
If \( G\left( {M,M^{\left( t \right)} } \right) \) is an auxiliary function of \( J\left( M \right) \), then \( J\left( M \right) \) is non-increasing under the following update rule: \( M^{{\left( {t + 1} \right)}} = \arg \min_{M} G\left( {M,M^{\left( t \right)} } \right) \).
Proof
\( J\left( {M^{{\left( {t + 1} \right)}} } \right) \le G\left( {M^{{\left( {t + 1} \right)}} ,M^{\left( t \right)} } \right) \le G\left( {M^{\left( t \right)} ,M^{\left( t \right)} } \right) = J\left( {M^{\left( t \right)} } \right) .\)
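This chain of inequalities is the whole engine of the convergence proof. For reference (a classic construction, not the paper's Eq. (41)), the auxiliary function Lee and Seung used for the \( h_{kj} \)-dependent part of the plain Frobenius NMF objective is the quadratic upper bound

```latex
% Classical Lee-Seung auxiliary function for the h_{kj}-dependent part of the
% plain Frobenius NMF objective (for comparison; not the paper's Eq. (41)):
G\left( h, h_{kj}^{(t)} \right)
  = J_{kj}\left( h_{kj}^{(t)} \right)
  + J'_{kj}\left( h_{kj}^{(t)} \right)\left( h - h_{kj}^{(t)} \right)
  + \frac{\left( \varvec{F}^{T}\varvec{F}\varvec{H}^{(t)} \right)_{kj}}
         {2\,h_{kj}^{(t)}}\left( h - h_{kj}^{(t)} \right)^{2}
```

whose minimizer over \( h \) recovers the familiar multiplicative update \( h_{kj}^{{\left( {t + 1} \right)}} = h_{kj}^{\left( t \right)} \left( {{\varvec{F}}^{T} {\varvec{X}}} \right)_{kj} /\left( {{\varvec{F}}^{T} {\varvec{F}}{\varvec{H}}^{\left( t \right)} } \right)_{kj} \). Eqs. (41), (49), (59) and (68) play the same role for the constrained objectives.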
We now show that, with a suitable auxiliary function, the update rule for \( {\varvec{H}} \) derived from Eq. (6) is exactly the update rule in Eq. (11).
Considering any element \( h_{kj} \) in \( {\varvec{H}} \), let \( J_{kj} \) denote the part of Eq. (6) that depends only on \( h_{kj} \). It is easy to check that:
Since the update rule is element-wise in essence, it is sufficient to demonstrate that each \( J_{kj} \) is non-increasing under the update rule in Eq. (11).
Lemma 2
Function (41) is an auxiliary function for \( J_{kj} \), the part of \( D_{\text{LC-NMF}} \) that depends only on \( h_{kj} \).
Proof
Since \( G\left( {h,h} \right) = J_{kj} \left( h \right) \) is obvious, it suffices to show that \( G\left( {h,h_{kj}^{\left( t \right)} } \right) \ge J_{kj} \left( h \right) \). To do this, we compare the Taylor series expansion of \( J_{kj} \left( h \right) \) with Eq. (41):
and it can be found that: \( G\left( {h,h_{kj}^{\left( t \right)} } \right) \ge J_{kj} \left( h \right) \) is equivalent to
Meanwhile, the following equations hold:
Thus, Eq. (43) holds and \( G\left( {h,h_{kj}^{\left( t \right)} } \right) \ge J_{kj} \left( h \right) \).
Now, it can be demonstrated that the objective function of Theorem 1 is non-increasing under the update rule in Eq. (11).
Proof
Replace \( G\left( {h,h_{kj}^{\left( t \right)} } \right) \) in Eq. (38) by Eq. (41), and the following update rule can be obtained:
Since Eq. (41) is an auxiliary function, \( J_{kj} \) is non-increasing under this update rule.
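The non-increasing property can be checked numerically. The sketch below uses the standard multiplicative updates for an unconstrained Frobenius NMF objective as a stand-in (the actual rule in Eq. (11) carries the local-constraint terms; the small epsilon for numerical safety is an implementation assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((20, 30))            # toy nonnegative data matrix
F = rng.random((20, 5)) + 1e-9      # basis matrix
H = rng.random((5, 30)) + 1e-9      # coefficient matrix

def loss(X, F, H):
    """Frobenius reconstruction objective J."""
    return 0.5 * np.sum((X - F @ H) ** 2)

losses = [loss(X, F, H)]
for _ in range(50):
    # Multiplicative updates derived via the auxiliary-function argument
    H *= (F.T @ X) / (F.T @ F @ H + 1e-9)
    F *= (X @ H.T) / (F @ H @ H.T + 1e-9)
    losses.append(loss(X, F, H))

# Each step satisfies J(M^(t+1)) <= G(M^(t+1), M^(t)) <= G(M^(t), M^(t)) = J(M^(t)),
# so the recorded objective values never go up (small tolerance for round-off).
monotone = all(a >= b - 1e-8 for a, b in zip(losses, losses[1:]))
print(monotone)  # True
```

The recorded objective sequence decreases monotonically, which is exactly the guarantee the auxiliary function delivers.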
In the following, the objective function is proved to be non-increasing under the update rule in Eq. (18).
Considering any element \( f_{ik} \) in \( {\varvec{F}} \), let \( J_{ik} \) denote the part of Eq. (6) that depends only on \( f_{ik} \). It is easy to check that:
Lemma 3
Function (49) is an auxiliary function for \( J_{ik} \), the part of \( D_{\text{LC-NMF}} \) that depends only on \( f_{ik} \).
Proof
Since \( G\left( {f,f} \right) = J_{ik} \left( f \right) \) is obvious, it suffices to show that \( G\left( {f,f_{ik}^{\left( t \right)} } \right) \ge J_{ik} \left( f \right) \). To do this, we compare the Taylor series expansion of \( J_{ik} \left( f \right) \) with Eq. (49):
Due to the condition \( \left( {{\varvec{F}}^{T} {\varvec{F}}} \right)_{kk} = 1 \), \( J_{ik} \left( f \right) \) can be rewritten as:
and it can be found that: \( G\left( {f,f_{ik}^{\left( t \right)} } \right) \ge J_{ik} \left( f \right) \) is equivalent to
Meanwhile, the following equations hold:
Thus, Eq. (52) holds and \( G\left( {f,f_{ik}^{\left( t \right)} } \right) \ge J_{ik} \left( f \right) \).
Now, the objective function of Theorem 1 can also be demonstrated to be non-increasing under the update rule in Eq. (18).
Proof
Replace \( G\left( {f,f_{ik}^{\left( t \right)} } \right) \) in Eq. (38) by Eq. (49), and the following update rule can be obtained:
Since Eq. (49) is an auxiliary function, \( J_{ik} \) is non-increasing under this update rule. Thus, Theorem 1 holds.
Appendix 2: Proof of Theorem 2
To prove Theorem 2, it suffices to show that the objective function in Eq. (24) is non-increasing under the update rules in Eqs. (32) and (33). The objective function is first proved to be non-increasing under the update rule in Eq. (32) and then under the update rule in Eq. (33). The proof uses an auxiliary function similar to that employed in the EM algorithm.
According to Definition 1 and Lemma 1 in "Appendix 1," it can be proved that the objective function of Theorem 2 is non-increasing under the update rule in Eq. (32).
Considering any element \( f_{ik} \) in \( {\varvec{F}} \), let \( J_{ik} \) denote the part of Eq. (24) that depends only on \( f_{ik} \). It is easy to check that:
Lemma 4
Function (59) is an auxiliary function for \( J_{ik} \), the part of \( D_{\text{TD-NMF}} \) that depends only on \( f_{ik} \),
where \( {\varvec{Sum1}} = \sum\nolimits_{u \in U} {\left( {{\varvec{X}}\left( {{\varvec{P}}_{u}^{ + } {\varvec{P}}_{u}^{ + T} + {\varvec{P}}_{u}^{ - } {\varvec{P}}_{u}^{ - T} - {\varvec{P}}_{u}^{ + } {\varvec{P}}_{u}^{ - T} - {\varvec{P}}_{u}^{ - } {\varvec{P}}_{u}^{ + T} } \right){\varvec{X}}^{T} {\varvec{F}}{\text{diag}}\left( {{\varvec{w}}_{u} } \right)^{T} {\text{diag}}\left( {{\varvec{w}}_{u} } \right)} \right)_{ik} } \).
Proof
Since \( G\left( {f,f} \right) = J_{ik} \left( f \right) \) is obvious, it suffices to show that \( G\left( {f,f_{ik}^{\left( t \right)} } \right) \ge J_{ik} \left( f \right) \). To do this, we compare the Taylor series expansion of \( J_{ik} \left( f \right) \) with Eq. (59),
where \( {\varvec{Sum2}} = \sum\nolimits_{u \in U} {\left( {{\varvec{X}}\left( {{\varvec{P}}_{u}^{ + } {\varvec{P}}_{u}^{ + T} + {\varvec{P}}_{u}^{ - } {\varvec{P}}_{u}^{ - T} - {\varvec{P}}_{u}^{ + } {\varvec{P}}_{u}^{ - T} - {\varvec{P}}_{u}^{ - } {\varvec{P}}_{u}^{ + T} } \right){\varvec{X}}^{T} } \right)_{ii} \left( {{\text{diag}}\left( {{\varvec{w}}_{u} } \right)^{T} {\text{diag}}\left( {{\varvec{w}}_{u} } \right)} \right)_{kk} } \).
And it can be found that: \( G\left( {f,f_{ik}^{\left( t \right)} } \right) \ge J_{ik} \left( f \right) \) is equivalent to
Meanwhile, the following inequalities hold:
Thus, Eq. (61) holds and \( G\left( {f,f_{ik}^{\left( t \right)} } \right) \ge J_{ik} \left( f \right) \).
Now, the objective function of Theorem 2 can be demonstrated to be non-increasing under the update rule in Eq. (32).
Proof
Replace \( G\left( {f,f_{ik}^{\left( t \right)} } \right) \) in Eq. (38) by Eq. (59), and the following update rule can be obtained:
Since Eq. (59) is an auxiliary function, \( J_{ik} \) is non-increasing under this update rule.
In the following, the objective function is proved to be non-increasing under the update rule in Eq. (33).
Considering any element \( h_{kj} \) in \( {\varvec{H}} \), let \( J_{kj} \) denote the part of Eq. (24) that depends only on \( h_{kj} \). It is easy to check that:
Lemma 5
Function (68) is an auxiliary function for \( J_{kj} \), the part of \( D_{\text{TD-NMF}} \) that depends only on \( h_{kj} \).
Proof
Since \( G\left( {h,h} \right) = J_{kj} \left( h \right) \) is obvious, it suffices to show that \( G\left( {h,h_{kj}^{\left( t \right)} } \right) \ge J_{kj} \left( h \right) \). To do this, we compare the Taylor series expansion of \( J_{kj} \left( h \right) \) with Eq. (68):
and it can be found that: \( G\left( {h,h_{kj}^{\left( t \right)} } \right) \ge J_{kj} \left( h \right) \) is equivalent to
Meanwhile, the following inequalities hold:
Thus, Eq. (70) holds and \( G\left( {h,h_{kj}^{\left( t \right)} } \right) \ge J_{kj} \left( h \right) \).
Now, the objective function of Theorem 2 can also be demonstrated to be non-increasing under the update rule in Eq. (33).
Proof
Replace \( G\left( {h,h_{kj}^{\left( t \right)} } \right) \) in Eq. (38) by Eq. (68), and the following update rule can be obtained:
Since Eq. (68) is an auxiliary function, \( J_{kj} \) is non-increasing under this update rule. Thus, Theorem 2 holds.
Cite this article
Tong, M., Chen, Y., Ma, L. et al. NMF with local constraint and Deep NMF with temporal dependencies constraint for action recognition. Neural Comput & Applic 32, 4481–4505 (2020). https://doi.org/10.1007/s00521-018-3685-9