
Manifold ranking graph regularization non-negative matrix factorization with global and local structures

  • Industrial and commercial application
  • Published in: Pattern Analysis and Applications

Abstract

Non-negative matrix factorization (NMF) is a popular technique for learning parts-based, linear representations of non-negative data. Although NMF can be computed quickly, it suffers from the following deficiency: it captures only the local geometric structure of the data and ignores the global geometric information of the data set. This paper proposes a manifold ranking graph regularization non-negative matrix factorization with local and global geometric structure (MRLGNMF) to overcome this deficiency. In particular, MRLGNMF incorporates manifold ranking into non-negative matrix factorization with the Sinkhorn distance. Numerical results show that the new algorithm is superior to existing algorithms.
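As background, the basic NMF model of [1, 2] factorizes a non-negative matrix X into non-negative factors U and V with X ≈ UVᵀ. A minimal sketch of the standard multiplicative updates for the Frobenius-norm objective (the classical baseline, not the MRLGNMF algorithm itself; sizes and iteration counts are illustrative):

```python
import numpy as np

def nmf(X, k, n_iter=200, eps=1e-9, seed=0):
    """Basic NMF via Lee-Seung multiplicative updates (Frobenius norm).

    Factorizes X ~= U @ V.T with U, V >= 0.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, k)) + eps
    V = rng.random((n, k)) + eps
    for _ in range(n_iter):
        # Multiplicative updates keep the factors non-negative by construction,
        # since every numerator and denominator is non-negative.
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U) / (V @ (U.T @ U) + eps)
    return U, V

X = np.abs(np.random.default_rng(1).random((20, 12)))
U, V = nmf(X, k=4)
err = np.linalg.norm(X - U @ V.T) / np.linalg.norm(X)
```

The graph-regularized variants discussed in this paper add penalty terms to this objective; the multiplicative-update structure is retained.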



References

  1. Lee DD, Seung HS (1999) Learning the parts of objects by non-negative matrix factorization. Nature 401:788–791


  2. Lee DD, Seung HS (2001) Algorithms for non-negative matrix factorization. Proc Neural Inf Process Syst 13:556–562


  3. Xu J, Xiang L, Wang G, Ganesan S, Feldman M, Shih NNC, Gilmore H, Madabhushi A (2015) Sparse non-negative matrix factorization (SNMF) based color unmixing for breast histopathological image analysis. Comput Med Imaging Graph 46:20–29


  4. Yan X, Guo J, Liu S, Cheng XQ, Wang Y (2012) Clustering short text using ncut-weighted non-negative matrix factorization. In: Conference on information and knowledge management, 2259–2262

  5. Li Y, Ngom A (2013) The non-negative matrix factorization toolbox for biological data mining. Source Code Biol Med 8(1):10


  6. Yang Z, Yuan Z, Laaksonen J (2011) Projective non-negative matrix factorization with applications to facial image processing. Int J Pattern Recognit Artif Intell 21(08):1353–1362


  7. Raj B, Virtanen T, Chaudhuri S, Singh R (2010) Non-negative matrix factorization based compensation of music for automatic speech recognition. In: Conference of the international speech communication association

  8. Lopes N, Ribeiro B (2010) Non-negative matrix factorization implementation using graphic processing units. In: Intelligent data engineering and automated learning, 275–283

  9. Pande P, Applegate BE, Jo JA (2012) Application of non-negative matrix factorization to multispectral FLIM data analysis. Biomed Opt Express 3(9):2244–2262


  10. Cai D, He X, Han J, Huang TS (2011) Graph regularized nonnegative matrix factorization for data representation. IEEE Trans Pattern Anal Mach Intell 33(8):1548–1560


  11. Tolic D, Antulov-Fantulin N, Kopriva I (2018) A nonlinear orthogonal non-negative matrix factorization approach to subspace clustering. Pattern Recognit 82(04):40–45


  12. Zhang H, Wang S, Xu X, Chow TWS, Jonathan Wu QM (2018) PTree2Vector: learning a vectorial representation for tree-structured data. IEEE Trans Neural Netw Learn Syst 99:1–15


  13. Sandler R, Lindenbaum M (2011) Nonnegative matrix factorization with earth movers distance metric for image analysis. Pattern Anal Mach Intell 33(8):1590–1602


  14. Qian W, Hong B, Cai D, He X, Li X (2016) Non-negative matrix factorization with Sinkhorn distance. In: Proceedings of the twenty-fifth international joint conference on artificial intelligence (IJCAI-16), 1960–1966

  15. Rubner Y, Tomasi C, Guibas LJ (2000) The earth mover’s distance as a metric for image retrieval. Int J Comput Vis 40(2):99–121


  16. Cuturi M (2013) Sinkhorn distances: lightspeed computation of optimal transport. In: Advances in neural information processing systems, 2292–2300

  17. Ma L, Li H, Meng F, Wu Q, Xu L (2017) Manifold-ranking embedded order preserving hashing for image semantic retrieval. J Vis Commun Image Represent 44:29–39


  18. Nene SA, Nayar SK, Murase H (1996) Columbia object image library (COIL-20). Technical report CUCS-005-96, Columbia University

  19. Samaria FS, Harter AC (1994) Parameterisation of a stochastic model for human face identification. In: IEEE workshop on applications of computer vision, 138–142


Acknowledgements

This research has been supported by the National Natural Science Foundation of China (Nos. 71561008, 11601012), the Guangxi Natural Science Foundation (No. 2018GXNSFAA138169), the Guangxi Key Laboratory of Cryptography and Information Security (No. GCIS201708), the Guangxi Key Laboratory of Automatic Detecting Technology and Instruments (Nos. YQ19111, YQ18112), and the GUET Excellent Graduate Thesis Program (No. 16YJPYSS22).

Author information


Corresponding author

Correspondence to Xiangli Li.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A

To prove Theorem 1 and derive the iterative update formulas for u and v, we follow reference [2] and recall Definition 1 [2] and Lemma 1 [2].

Definition 1

 [2] \(G(v,v^{\prime })\) is an auxiliary function for \(F(v)\) if the conditions

$$\begin{aligned} G(v,v^{\prime })\ge F(v),\quad G(v,v)=F(v), \end{aligned}$$
(A.1)

are satisfied.

The auxiliary function is very useful because of the following lemma.

Lemma 1

[2] If \(G\) is an auxiliary function of \(F\), then \(F\) is non-increasing under the update

$$\begin{aligned} v^{t+1}=\arg \min _{v}G(v,v^{t}). \end{aligned}$$
(A.2)
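Lemma 1 is the standard majorize–minimize argument: because \(F(v^{t+1})\le G(v^{t+1},v^{t})\le G(v^{t},v^{t})=F(v^{t})\), each auxiliary-function minimization can only decrease F. The following sketch checks this monotonicity numerically for the classical KL-divergence NMF update of [2], whose closed-form minimizer of the auxiliary function is the familiar multiplicative rule (a simpler analogue of the update derived below; matrix sizes are illustrative):

```python
import numpy as np

def kl_div(x, y, eps=1e-12):
    # Generalized KL divergence D(x || y) = sum_i x_i log(x_i/y_i) - x_i + y_i.
    return float(np.sum(x * np.log((x + eps) / (y + eps)) - x + y))

rng = np.random.default_rng(0)
U = rng.random((30, 5)) + 0.1   # fixed basis
x = rng.random(30) + 0.1        # data vector
v = rng.random(5) + 0.1         # coefficients to be updated

losses = [kl_div(x, U @ v)]
for _ in range(50):
    # Minimizing G(v, v^t) in closed form gives the multiplicative update of [2];
    # by Lemma 1 the objective cannot increase at any step.
    v = v * (U.T @ (x / (U @ v))) / U.sum(axis=0)
    losses.append(kl_div(x, U @ v))

monotone = all(b <= a + 1e-10 for a, b in zip(losses, losses[1:]))
```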

Now we show that the update step for V in (3.4) follows from a proper auxiliary function. We rewrite the objective function \({\mathcal {O}}\) of MRLGNMF as follows:

$$\begin{aligned} {\mathcal {O}} &= d_{M}^{\lambda ,\gamma }(x,y)+{\mathcal {L}}_{F}+\gamma \mathrm{Tr}(V^{T}LV)\nonumber \\ &= \min _{T_{pq}\ge 0}\sum _{p,q}(M\odot T)_{pq}+\dfrac{1}{\lambda }H(T)+\gamma (\widetilde{KL}{(T1\Vert x)}\nonumber \\&\quad +\, \widetilde{KL}{(T^{T}1\Vert y)})\nonumber \\&\quad +\, \dfrac{\zeta }{2}\left( \sum ^{n}_{i,j=1}W_{ij}\left\| \frac{1}{\sqrt{D_{ii}}}F_{i}-\frac{1}{\sqrt{D_{jj}}}F_{j}\right\| ^{2}\right. \nonumber \\&\quad \left. +\mu \sum ^{n}_{i=1}\Vert F_{i}-V_{i}\Vert ^{2}\right) \nonumber \\&\quad +\, \sigma \sum ^{k}_{l=1}\sum ^{n}_{i=1}\sum ^{n}_{j=1}V_{jl}L_{ji}V_{il},\nonumber \\&\qquad \quad \mathrm {s.t.}\;U\ge 0,\,V\ge 0. \end{aligned}$$
(A.3)
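The first term of (A.3) is the entropically regularized optimal transport cost of [16], which can be evaluated with Sinkhorn's matrix-scaling iterations. A minimal sketch, using Cuturi's sign convention for the entropy term (the cost matrix, marginals, and parameter values here are illustrative, not those of the experiments):

```python
import numpy as np

def sinkhorn(M, a, b, lam=10.0, n_iter=500):
    """Entropically regularized OT of [16]: min_T <T, M> - (1/lam) H(T)."""
    K = np.exp(-lam * M)          # Gibbs kernel of the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iter):
        # Alternating scaling enforces the marginal constraints T1 = a, T^T 1 = b.
        v = b / (K.T @ u)
        u = a / (K @ v)
    T = u[:, None] * K * v[None, :]   # optimal transport plan
    return T, float(np.sum(T * M))    # plan and transport cost <T, M>

rng = np.random.default_rng(0)
M = rng.random((4, 6))            # illustrative pairwise cost matrix
a = np.full(4, 1 / 4)             # source marginal
b = np.full(6, 1 / 6)             # target marginal
T, cost = sinkhorn(M, a, b)
```

In the relaxed objective (A.3) the hard marginal constraints are replaced by the penalties \(\widetilde{KL}(T1\Vert x)\) and \(\widetilde{KL}(T^{T}1\Vert y)\), but the scaling structure of the solver is the same.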

Now we derive the update formulas for u and v and prove Theorem 1.

Lemma 2

Following reference [2], the function \(G(v,v^{q})\) defined below is an auxiliary function for the objective function \({\mathcal {O}}\) with respect to v, where

$$\begin{aligned} G(v,v^{q}) &= G_{1}(v,v^{q})+G_{2}(v,v^{q})+G_{3}(v,v^{q}), \end{aligned}$$
(A.4)
$$\begin{aligned} G_{1}(v,v^{q}) &= \sum _{i}(x_{i}\log x_{i}-x_{i})+\sum _{ia}U_{ia}v_{a}\nonumber \\&-\, \sum _{ia}x_{i}\frac{U_{ia}v^{q}_{a}}{\sum _{b}U_{ib}v^{q}_{b}}\nonumber \\&\quad \left( \log U_{ia}v_{a}-\log \frac{U_{ia}v^{q}_{a}}{\sum _{b}U_{ib}v^{q}_{b}}\right) , \end{aligned}$$
(A.5)
$$\begin{aligned} G_{2}(v,v^{q}) &= z(v)+z^{\prime }(v)(v-v^{q})+\frac{1}{2}\frac{2\varphi (v+f)}{v}(v-v^{q})^{2}, \end{aligned}$$
(A.6)
$$\begin{aligned} G_{3}(v,v^{q}) &= \sigma \sum ^{k}_{l=1}\sum ^{n}_{i=1}\sum ^{n}_{j=1}V_{jl}L_{ji}V_{il}, \end{aligned}$$
(A.7)

Now let \({\mathcal {L}}_{\mathcal {O}}\) denote the Lagrangian of the objective function \({\mathcal {O}}\), which can be written as

$$\begin{aligned} {\mathcal {L}}_{\mathcal {O}} &= \min _{T_{pq}\ge 0}\sum _{p,q}(M\odot T)_{pq}+\dfrac{1}{\lambda }H(T)+\gamma (\widetilde{KL}{(T1\Vert x)}\nonumber \\&+\, \widetilde{KL}{(T^{T}1\Vert y)})\nonumber \\&+\, \dfrac{\zeta }{2}\left( \sum ^{n}_{i,j=1}W_{ij}\left\| \frac{1}{\sqrt{D_{ii}}}F_{i}-\frac{1}{\sqrt{D_{jj}}}F_{j}\right\| ^{2}\right. \nonumber \\&\left. +\, \mu \sum ^{n}_{i=1}\Vert F_{i}-V_{i}\Vert ^{2}\right) \nonumber \\&+\, \sigma \sum ^{k}_{l=1}\sum ^{n}_{i=1}\sum ^{n}_{j=1}V_{jl}L_{ji}V_{il}+\mathrm{Tr}(\varphi U^{T})+\mathrm{Tr}(\phi V^{T}). \end{aligned}$$
(A.8)

Then, taking the partial derivatives of \({\mathcal {L}}_{\mathcal {O}}\) with respect to U and V and applying the KKT conditions \(\varphi _{ij}u_{ij}=0\) and \(\phi _{ij}v_{ij}=0\), we obtain the following update formulas for u and v for the objective function \({\mathcal {O}}\):

$$\begin{aligned}&u_{ik}\leftarrow u_{ik}\frac{\sum _{s}v_{sk}\frac{\sum _{t}T^{*it}_{s}}{y_{is}}}{\sum _{s}v_{sk}}, \end{aligned}$$
(A.9)
$$\begin{aligned}&v_{jk}\leftarrow v_{jk}\frac{\sum _{s}u_{sk}\frac{\sum _{t}T^{*st}_{j}}{y_{sj}}+\varphi F^{+}_{jk}+\sigma (WV)^{\frac{1}{2}}_{jk}}{\sum _{s}u_{sk}+\sigma (2DV)^{\frac{1}{2}}_{jk}+\varphi v_{jk}+\varphi F^{-}_{jk} }, \end{aligned}$$
(A.10)

where \(\varphi =\zeta \mu \), F is updated in accordance with equation (3.2), and \(F_{ij}^{+}=\frac{|F_{ij}|+F_{ij}}{2}\), \(F_{ij}^{-}=\frac{|F_{ij}|-F_{ij}}{2}\).
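The splitting \(F=F^{+}-F^{-}\) into element-wise positive and negative parts is what keeps every factor in the numerator and denominator of (A.10) non-negative, so the multiplicative update preserves \(v_{jk}\ge 0\). A minimal sketch of this splitting (the matrix here is an arbitrary illustration):

```python
import numpy as np

def pos_neg_split(F):
    # F+ = (|F| + F)/2 and F- = (|F| - F)/2, so that F = F+ - F-
    # with both parts element-wise non-negative.
    Fp = (np.abs(F) + F) / 2.0
    Fn = (np.abs(F) - F) / 2.0
    return Fp, Fn

F = np.array([[1.5, -2.0],
              [0.0,  3.0]])
Fp, Fn = pos_neg_split(F)
```

Placing \(F^{+}\) in the numerator and \(F^{-}\) in the denominator of the update ratio is the standard device for handling sign-indefinite terms in multiplicative NMF updates.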


About this article


Cite this article

Li, X., Yu, J., Dong, X. et al. Manifold ranking graph regularization non-negative matrix factorization with global and local structures. Pattern Anal Applic 23, 967–974 (2020). https://doi.org/10.1007/s10044-019-00832-0

