Abstract
Variational Bayes learning is widely used in statistical models with hidden variables, such as normal mixtures, binomial mixtures, and hidden Markov models. To derive a variational Bayes learning algorithm, the hyperparameters of the prior distribution must be determined. In this paper, we propose two methods for optimizing the hyperparameters, each suited to a different purpose. In the first method, the hyperparameter is determined so as to minimize the generalization error. In the second method, the hyperparameter is chosen so that the unknown hidden structure in the data can be discovered. Experiments show that the optimal hyperparameters differ between generalized learning and knowledge discovery.
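To illustrate how a prior hyperparameter shapes a variational Bayes solution, the sketch below shows the standard VB M-step for the mixing weights of a mixture model with a Dirichlet prior. This is a minimal illustration, not the paper's algorithm: the function names and the example counts are hypothetical, and only the weight update is shown. A small Dirichlet hyperparameter suppresses underused components (useful for discovering hidden structure), while a larger one keeps all components active (smoother estimates for generalization).

```python
import math

def digamma(x):
    """Simple digamma approximation via recurrence plus an asymptotic series."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1.0 / 12 - f * (1.0 / 120 - f / 252))

def vb_mixture_weights(counts, alpha0):
    """One VB M-step for the mixing weights of a mixture model.

    counts[k] is the expected number of points assigned to component k
    (the sum of its responsibilities); alpha0 is the Dirichlet
    hyperparameter.  Returns the weights exp(E[log pi_k]) used in the
    next E-step; for small alpha0, components with few assigned points
    are driven toward zero weight.
    """
    alpha = [alpha0 + n for n in counts]
    total = sum(alpha)
    return [math.exp(digamma(a) - digamma(total)) for a in alpha]

# Hypothetical counts: component 1 has only 3 of 100 points assigned to it.
counts = [97.0, 3.0]
sparse = vb_mixture_weights(counts, 0.01)   # sparse prior: prunes weak components
smooth = vb_mixture_weights(counts, 10.0)   # broad prior: keeps all components
print(sparse[1] < smooth[1])  # the sparse prior suppresses the weak component
```

Under the sparse prior the weak component's effective weight is several times smaller than under the broad one, which is why the hyperparameter optimal for generalization need not coincide with the one optimal for structure discovery.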
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Kaji, D., Watanabe, S. (2009). Optimal Hyperparameters for Generalized Learning and Knowledge Discovery in Variational Bayes. In: Leung, C.S., Lee, M., Chan, J.H. (eds) Neural Information Processing. ICONIP 2009. Lecture Notes in Computer Science, vol 5863. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-10677-4_54
DOI: https://doi.org/10.1007/978-3-642-10677-4_54
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-10676-7
Online ISBN: 978-3-642-10677-4