Abstract
In the multi-view regression problem, the input variable (a real vector) can be partitioned into two different views, and it is assumed that either view of the input alone is sufficient to make accurate predictions. This is essentially (a significantly weaker version of) the co-training assumption, adapted to regression.
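One way to make the "either view suffices" assumption precise, in a hedged paraphrase (the slack \(\varepsilon\) and the restriction to square loss are our reading of the setup, not a quotation of the paper's exact condition):

\[
\mathbb{E}\big[(f_j(X^{(j)}) - Y)^2\big] \;\le\; \mathbb{E}\big[(f(X) - Y)^2\big] + \varepsilon, \qquad j \in \{1, 2\},
\]

where \(X = (X^{(1)}, X^{(2)})\) is the two-view input, \(f_j\) is the best predictor using view \(j\) alone, and \(f\) is the best predictor using both views; a small \(\varepsilon\) says each view is nearly as predictive as the pair.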
We provide a semi-supervised algorithm that first uses unlabeled data to learn a norm (or, equivalently, a kernel) and then uses labeled data in a ridge regression algorithm (with this induced norm) to produce the predictor. The unlabeled data is used via canonical correlation analysis (CCA, which is closely related to PCA for two random variables) to derive an appropriate norm over functions. We characterize the intrinsic dimensionality of the subsequent ridge regression problem (which uses this norm) by a rather simple expression in the correlation coefficients provided by CCA. Interestingly, the norm used by the ridge regression algorithm is derived from CCA, unlike in standard kernel methods, where a special norm is assumed a priori (i.e., a particular reproducing kernel Hilbert space is fixed in advance). We discuss how this result shows that unlabeled data can decrease the sample complexity.
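To make the two-stage procedure concrete, here is a minimal Python sketch under strong simplifying assumptions: both views are linear, CCA is estimated from unlabeled pairs by an SVD of the whitened cross-covariance, and ridge regression is then run in the canonical basis. The function names (linear_cca, cca_ridge) and the coordinate penalties (1 - lam_i) / lam_i are our own illustrative choices, meant only to convey the shape of the algorithm (directions with high canonical correlation are trusted and penalized lightly); they are not claimed to be the paper's exact norm.

    import numpy as np

    def linear_cca(X1, X2, reg=1e-6):
        # Estimate linear CCA from unlabeled paired views (rows = samples).
        X1 = X1 - X1.mean(axis=0)
        X2 = X2 - X2.mean(axis=0)
        n = X1.shape[0]
        C11 = X1.T @ X1 / n + reg * np.eye(X1.shape[1])  # view-1 covariance
        C22 = X2.T @ X2 / n + reg * np.eye(X2.shape[1])  # view-2 covariance
        C12 = X1.T @ X2 / n                              # cross-covariance
        # Whiten each view via Cholesky factors, then take the SVD of the
        # whitened cross-covariance; the singular values are the canonical
        # correlation coefficients.
        L1 = np.linalg.cholesky(C11)
        L2 = np.linalg.cholesky(C22)
        M = np.linalg.solve(L1, C12) @ np.linalg.inv(L2).T
        U, lam, _ = np.linalg.svd(M, full_matrices=False)
        A1 = np.linalg.solve(L1.T, U)  # maps view 1 to canonical coordinates
        return lam, A1

    def cca_ridge(X1_lab, y, A1, lam, kappa=1.0, eps=1e-8):
        # Ridge regression in the canonical basis with correlation-dependent
        # penalties. The weight (1 - lam_i) / lam_i is an illustrative
        # assumption: weakly correlated directions are penalized heavily.
        Z = (X1_lab - X1_lab.mean(axis=0)) @ A1
        y = y - y.mean()
        w = (1.0 - lam) / np.maximum(lam, eps)
        return np.linalg.solve(Z.T @ Z + kappa * np.diag(w), Z.T @ y)

    # Usage sketch: plentiful unlabeled pairs fit the norm, a small labeled
    # set fits the predictor.
    rng = np.random.default_rng(0)
    shared = rng.normal(size=(500, 3))  # latent signal visible to both views
    U1 = shared @ rng.normal(size=(3, 10)) + 0.1 * rng.normal(size=(500, 10))
    U2 = shared @ rng.normal(size=(3, 8)) + 0.1 * rng.normal(size=(500, 8))
    lam, A1 = linear_cca(U1, U2)                        # unlabeled step
    beta = cca_ridge(U1[:30], shared[:30, 0], A1, lam)  # labeled step
    print(lam.round(2))

The closed-form solve mirrors ordinary ridge regression; the only change is the diagonal penalty, which is learned from unlabeled data rather than fixed in advance.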
Copyright information
© 2007 Springer Berlin Heidelberg
About this paper
Cite this paper
Kakade, S.M., Foster, D.P. (2007). Multi-view Regression Via Canonical Correlation Analysis. In: Bshouty, N.H., Gentile, C. (eds.) Learning Theory. COLT 2007. Lecture Notes in Computer Science, vol. 4539. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72927-3_8
DOI: https://doi.org/10.1007/978-3-540-72927-3_8
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-72925-9
Online ISBN: 978-3-540-72927-3
eBook Packages: Computer Science, Computer Science (R0)