Abstract
We give a compression scheme for any maximum class of VC dimension d that compresses any sample consistent with a concept in the class to at most d unlabeled points from the domain of the sample.
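To fix the terminology: a class of VC dimension d over n domain points is *maximum* when its size meets Sauer's bound, the sum of C(n, i) for i from 0 to d, with equality. The following is a minimal Python sketch of these two definitions only (it is an illustration of the objects involved, not an implementation of the paper's compression scheme); the helper names `vc_dimension` and `is_maximum_class` are ours.

```python
from itertools import combinations
from math import comb

def vc_dimension(concepts, domain):
    """Largest d such that some d-subset of the domain is shattered,
    i.e. the concepts realize all 2^d labelings on it."""
    d = 0
    for k in range(1, len(domain) + 1):
        if any(len({tuple(x in c for x in s) for c in concepts}) == 2 ** k
               for s in combinations(domain, k)):
            d = k
    return d

def is_maximum_class(concepts, domain, d):
    """A class of VC dimension d on n points is maximum when its size
    equals Sauer's bound sum_{i<=d} C(n, i)."""
    n = len(domain)
    return len(concepts) == sum(comb(n, i) for i in range(d + 1))

# Example: on a 3-point domain, the subsets of size <= 1 form a
# maximum class of VC dimension 1 (size 4 = C(3,0) + C(3,1)).
domain = [0, 1, 2]
concepts = [frozenset()] + [frozenset([x]) for x in domain]
d = vc_dimension(concepts, domain)
print(d, is_maximum_class(concepts, domain, d))  # → 1 True
```

For such a class, the paper's result says every sample consistent with some concept can be represented by at most d = 1 unlabeled domain points.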
Supported by NSF grant CCR 9821087.
Some work on this paper was done while the authors were visiting National ICT Australia.
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Kuzmin, D., Warmuth, M.K. (2005). Unlabeled Compression Schemes for Maximum Classes. In: Auer, P., Meir, R. (eds.) Learning Theory. COLT 2005. Lecture Notes in Computer Science, vol. 3559. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11503415_40
DOI: https://doi.org/10.1007/11503415_40
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-26556-6
Online ISBN: 978-3-540-31892-7