Abstract
This paper extends the framework of independent component analysis (ICA) to supervised learning. The key idea is to find a representation of the input variables that is conditionally independent given the output. Such a representation is well suited to naive Bayes learning, which has been reported to perform as well as more sophisticated methods. The learning algorithm is derived from a criterion similar to that of ICA; two-dimensional entropy plays the role there that one-dimensional entropy plays in ICA.
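The pipeline the abstract describes, extract components that are (approximately) conditionally independent given the class, then apply naive Bayes to those components, can be sketched as follows. This is an illustrative approximation, not the paper's entropy-based algorithm: as a stand-in for the conditional-independence extraction step it whitens the inputs with the pooled within-class covariance, and all data and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-class toy data whose inputs are correlated by a mixing matrix.
n = 200
y = rng.integers(0, 2, size=n)
means = np.array([[0.0, 0.0], [2.0, 2.0]])
A = np.array([[1.0, 0.8], [0.0, 1.0]])       # mixing introduces correlation
X = means[y] + rng.normal(size=(n, 2)) @ A.T

# Step 1: a linear transform intended to make the components approximately
# conditionally independent given the class.  Whitening with the pooled
# within-class covariance is only a stand-in for the paper's criterion.
Xc = np.vstack([X[y == c] - X[y == c].mean(0) for c in (0, 1)])
cov = Xc.T @ Xc / len(Xc)
evals, evecs = np.linalg.eigh(cov)
W = evecs / np.sqrt(evals)                   # whitening matrix
S = X @ W                                    # extracted components

# Step 2: naive Bayes on the components: one 1-D Gaussian per component
# and class, combined under the conditional-independence assumption.
def nb_log_posterior(s):
    logs = []
    for c in (0, 1):
        Sc = S[y == c]
        mu, sd = Sc.mean(0), Sc.std(0)
        ll = -0.5 * np.sum(((s - mu) / sd) ** 2 + np.log(2 * np.pi * sd**2))
        logs.append(ll + np.log(np.mean(y == c)))
    return np.array(logs)

pred = np.array([np.argmax(nb_log_posterior(s)) for s in S])
accuracy = float(np.mean(pred == y))
print(round(accuracy, 2))
```

Because naive Bayes factorizes the class-conditional density into one-dimensional marginals, it benefits directly from any transform that makes those marginals closer to conditionally independent, which is the motivation for the extraction step above.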
Copyright information
© 2001 Springer-Verlag Berlin Heidelberg
Cite this paper
Akaho, S. (2001). Conditionally Independent Component Extraction for Naive Bayes Inference. In: Dorffner, G., Bischof, H., Hornik, K. (eds) Artificial Neural Networks — ICANN 2001. ICANN 2001. Lecture Notes in Computer Science, vol 2130. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44668-0_75
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-42486-4
Online ISBN: 978-3-540-44668-2