Abstract
Gaussian processes are a popular and effective Bayesian method for classification and regression, but exact inference scales cubically with the size of the training set, which makes sparse Gaussian processes an active research topic. Inspired by multi-task learning, we argue that selecting the subsets of multiple Gaussian processes simultaneously is preferable to selecting them separately. In this paper, we propose an improved multi-task sparsity regularizer that effectively regularizes the subset selection of multiple tasks for multi-task sparse Gaussian processes. In particular, starting from the multi-task sparsity regularizer proposed in [12], we make two improvements: 1) when measuring the global consistency of a point, we replace a subset of points with a rough global structure; 2) we normalize each dimension of every data set before sparsification. We combine the regularizer with two methods to demonstrate its effectiveness, and experimental results on four real data sets show its superiority.
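To make the two improvements concrete, the following minimal Python/NumPy sketch illustrates them under our own assumptions; it is not the authors' implementation. Improvement 2 is plain per-dimension z-scoring, and for improvement 1 we use k-means centroids as a hypothetical stand-in for the paper's "rough global structure"; the function names normalize_per_dimension, rough_global_structure, and global_consistency are ours.

import numpy as np

def normalize_per_dimension(X):
    # Improvement 2 (as described in the abstract): z-score every
    # feature dimension of a data set before sparsification.
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0.0] = 1.0  # guard against constant dimensions
    return (X - mu) / sigma

def rough_global_structure(X, k=10, iters=20, seed=0):
    # Hypothetical stand-in for the paper's rough global structure:
    # plain k-means centroids summarizing the whole data set.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members) > 0:
                centers[j] = members.mean(axis=0)
    return centers

def global_consistency(x, centers):
    # Improvement 1: score one point against the global structure
    # rather than against a subset of points; here, negated squared
    # distance to the nearest centroid (our assumption).
    d2 = ((centers - x) ** 2).sum(axis=1)
    return -d2.min()

# Usage sketch: normalize, build the global structure, rank points.
X = normalize_per_dimension(np.random.randn(200, 5))
centers = rough_global_structure(X, k=5)
scores = np.array([global_consistency(x, centers) for x in X])
subset = np.argsort(scores)[-50:]  # most globally consistent points

In the paper, such consistency scores would enter the multi-task sparsity regularizer so that the subsets of all tasks are selected jointly; the ranking above only illustrates the single-task ingredient.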
References
Rasmussen, C., Williams, C.: Gaussian Processes for Machine Learning. MIT Press, Cambridge (2006)
Sun, S.: Infinite mixtures of multivariate Gaussian processes. In: Proceedings of the International Conference on Machine Learning and Cybernetics, pp. 1011–1016 (2013)
Bonilla, E., Chai, K.M., Williams, C.K.I.: Multi-task Gaussian process prediction. In: Proceedings of the Neural Information Processing Systems, pp. 1–8 (2008)
Williams, C., Seeger, M.: Using the Nyström method to speed up kernel machines. In: Advances in Neural Information Processing Systems, vol. 13, pp. 682–688 (2001)
Lawrence, N., Seeger, M., Herbrich, R.: Fast sparse Gaussian process methods: The informative vector machine. In: Advances in Neural Information Processing Systems, vol. 15, pp. 609–616 (2002)
Titsias, M.: Variational learning of inducing variables in sparse Gaussian processes. In: Proceedings of the 12th International Workshop on Artificial Intelligence and Statistics, pp. 567–574 (2009)
Dhillon, P., Foster, D., Ungar, L.: Minimum description length penalization for group and multi-task sparse learning. Journal of Machine Learning Research 12, 525–564 (2011)
Jebara, T.: Multitask sparsity via maximum entropy discrimination. Journal of Machine Learning Research 12, 75–110 (2011)
Lawrence, N., Platt, J.: Learning to learn with the informative vector machine. In: Proceedings of International Conference on Machine Learning, pp. 1–8 (2004)
Wang, Y., Khardon, R.: Sparse Gaussian processes for multi-task learning. In: Flach, P.A., De Bie, T., Cristianini, N. (eds.) ECML PKDD 2012, Part I. LNCS, vol. 7523, pp. 711–727. Springer, Heidelberg (2012)
Bonilla, E., Agakov, F., Williams, C.: Kernel multi-task learning using task-specific features. In: Proceedings of International Conference on Artificial Intelligence and Statistics, pp. 43–50 (2007)
Zhu, J., Sun, S.: Single-task and multitask Gaussian processes. In: Proceedings of the International Conference on Machine Learning and Cybernetics, pp. 1033–1038 (2013)
Zhu, J., Sun, S.: Sparse Gaussian processes with manifold-preserving graph reduction. Neurocomputing 138, 99–105 (2014)
Sun, S., Hussain, Z., Shawe-Taylor, J.: Manifold-preserving graph reduction for sparse semi-supervised learning. Neurocomputing 124, 13–21 (2014)
Sun, S.: Multitask learning for EEG-based biometrics. In: Proceedings of the 19th International Conference on Pattern Recognition, pp. 1–4 (2008)