Abstract
Sampling latents from the posterior distribution efficiently and accurately is a fundamental problem in posterior inference. Markov chain Monte Carlo (MCMC) is a standard tool for this task, but it is computationally expensive: many transition steps are needed to converge to the stationary distribution for each datapoint. Amortized variational inference within the MCMC framework addresses this cost by sharing the learned model parameters across all observations. The Langevin autoencoder is a recently proposed method that amortizes inference in parameter space. This paper generalizes the Langevin autoencoder by using the stochastic gradient Nosé-Hoover thermostat Hamiltonian Monte Carlo to perform amortized updates of the parameters of the inference distribution. The proposed method improves the accuracy of variational inference over the latents by absorbing the noise introduced by stochastic gradients without estimating that noise explicitly. Experiments benchmarking our method against baseline generative models demonstrate its effectiveness.
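To illustrate the core sampler the abstract refers to, the sketch below runs a single chain of the stochastic gradient Nosé-Hoover thermostat dynamics on a toy 1-D target (a standard normal, so the potential is U(θ) = θ²/2) with artificially noisy gradients. This is a minimal illustration of the thermostat mechanism, not the paper's amortized algorithm; the step size, friction constant, and noise scale are assumptions chosen for the toy problem.

```python
import numpy as np

def sgnht_sample(n_steps=50000, h=0.01, A=1.0, grad_noise=1.0, seed=0):
    """Stochastic gradient Nose-Hoover thermostat on U(theta) = theta^2 / 2.

    The thermostat variable xi adapts so that the average kinetic energy
    matches its target, which lets the dynamics absorb the (unknown)
    gradient noise without estimating it explicitly.
    """
    rng = np.random.default_rng(seed)
    theta, p, xi = 0.0, 0.0, A  # position, momentum, thermostat
    samples = []
    for _ in range(n_steps):
        # Noisy gradient of U; the noise stands in for minibatch noise.
        grad = theta + grad_noise * rng.normal()
        # Momentum update with thermostat friction and injected noise.
        p += -xi * p * h - grad * h + np.sqrt(2.0 * A * h) * rng.normal()
        theta += p * h
        # Thermostat update: drives the kinetic energy toward its target.
        xi += (p * p - 1.0) * h
        samples.append(theta)
    return np.array(samples[n_steps // 2:])  # discard burn-in

s = sgnht_sample()
print(s.mean(), s.var())
```

If the thermostat is working, the retained samples should have mean near 0 and variance near 1 despite the gradient noise, which is the behavior the abstract exploits for amortized inference.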
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Yuan, Z., Xu, C., Lin, Z., Zhang, Z. (2024). Amortized Variational Inference via Nosé-Hoover Thermostat Hamiltonian Monte Carlo. In: Luo, B., Cheng, L., Wu, ZG., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Lecture Notes in Computer Science, vol 14447. Springer, Singapore. https://doi.org/10.1007/978-981-99-8079-6_7
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8078-9
Online ISBN: 978-981-99-8079-6