
Stick-Breaking Dependent Beta Processes with Variational Inference

Published in Neural Processing Letters

Abstract

The beta process (BP) is a powerful nonparametric tool for feature learning, often used as the prior of a Bernoulli process for choosing features from a feature dictionary. However, it still shows a limitation in processing some real-world data, since the standard BP is independent of the data. In practice, the probabilities of selecting features in the latent space differ across observed data points, and they usually depend on side information from the data, such as location or time; for example, data at closer locations usually have similar features. This kind of information (location or time) is often called covariates, and it is ignored in most BP-related literature. To address this problem, we propose the variational inference based dependent beta process (VDBP), in which the dependent beta process is constructed using the stick-breaking representation and the dependency on the covariates is captured by a Gaussian process prior. An elegant variational inference scheme for models with a VDBP prior is derived, which offers an efficient training method for models that use VDBP as a prior. By instantiating a Bayesian factor analysis model with VDBP, we verify the effectiveness of the proposed VDBP on image denoising and image inpainting tasks.
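
To make the construction described above concrete, the following sketch (a minimal illustration in Python/NumPy, not the authors' implementation or exact construction) shows how a covariate-dependent, stick-breaking feature prior can be simulated: truncated Beta(alpha, 1) stick-breaking weights give each dictionary atom a decaying base probability, and a Gaussian-process draw over the covariates modulates each data point's probability of selecting that atom, so that data with nearby covariates tend to share features. The simple stick-breaking used here is an IBP-style simplification of the beta-process construction, and all names in the snippet are hypothetical.

# Minimal illustrative sketch (Python/NumPy); not the authors' implementation.
# A truncated stick-breaking draw of a feature-selection prior whose per-atom
# Bernoulli probabilities depend on covariates through a Gaussian process.
import numpy as np

def rbf_kernel(x, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance over 1-D covariates (e.g., location or time).
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def dependent_stick_breaking_draw(x, K=20, alpha=2.0, lengthscale=1.0, seed=0):
    # Returns a binary feature-assignment matrix Z (N x K): row n selects the
    # dictionary atoms used by data point n, with probabilities that decay in k
    # (stick-breaking) and vary smoothly with the covariate x[n] (GP modulation).
    rng = np.random.default_rng(seed)
    N = len(x)

    # Truncated stick-breaking weights: pi_k = prod_{j<=k} v_j, v_j ~ Beta(alpha, 1).
    v = rng.beta(alpha, 1.0, size=K)
    pi = np.cumprod(v)

    # One GP draw per atom encodes how strongly each covariate location uses it.
    cov = rbf_kernel(x, lengthscale) + 1e-6 * np.eye(N)
    chol = np.linalg.cholesky(cov)
    g = chol @ rng.standard_normal((N, K))        # N x K latent GP values

    # Covariate-dependent inclusion probabilities: squash the GP into (0, 1)
    # and scale by the decaying stick-breaking weight of each atom.
    probs = pi[None, :] / (1.0 + np.exp(-g))      # N x K, each entry in (0, pi_k)

    # Bernoulli process: each data point switches atoms on or off independently.
    Z = (rng.random((N, K)) < probs).astype(int)
    return Z, probs

if __name__ == "__main__":
    x = np.linspace(0.0, 10.0, 50)                # covariates, e.g., pixel/patch locations
    Z, probs = dependent_stick_breaking_draw(x)
    print(Z.shape, Z.sum(axis=0)[:5])             # early atoms are used most; nearby x agree

Nearby covariates share the same GP values and therefore tend to activate the same atoms, which is the kind of dependency the GP-based construction is designed to capture; the full VDBP additionally infers these quantities with variational inference rather than sampling them forward.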



Acknowledgements

Zehui Cao and Jing Zhao are joint first authors. This work is supported by the National Natural Science Foundation of China under Projects 62076096 and 62006078, Shanghai Municipal Project 20511100900, Shanghai Knowledge Service Platform Project (No. ZF1213), and the Chenguang Program (No. 19CG25) of the Shanghai Education Development Foundation and Shanghai Municipal Education Commission. We would also like to thank Yi Zhang for her help with the experimental verification of the proposed methods.

Author information


Corresponding author

Correspondence to Shiliang Sun.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Cao, Z., Zhao, J. & Sun, S. Stick-Breaking Dependent Beta Processes with Variational Inference. Neural Process Lett 53, 339–353 (2021). https://doi.org/10.1007/s11063-020-10392-8

