Hierarchical Design of Fast Minimum Disagreement Algorithms

  • Conference paper
Algorithmic Learning Theory (ALT 2015)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9355)


Abstract

We compose a toolbox for the design of Minimum Disagreement algorithms. This box contains general procedures which transform (without much loss of efficiency) algorithms that are successful for some d-dimensional (geometric) concept class \(\mathcal{C}\) into algorithms which are successful for a \((d+1)\)-dimensional extension of \(\mathcal{C}\). An iterative application of these transformations has the potential of starting with a base algorithm for a trivial problem and ending up at a smart algorithm for a non-trivial problem. To make this work, it is essential that the algorithms are not proper, i.e., they may return a hypothesis that is not necessarily a member of \(\mathcal{C}\). However, the “price” for using a super-class \(\mathcal{H}\) of \(\mathcal{C}\) is so low that the resulting time bound for achieving accuracy \(\varepsilon \) in the model of agnostic learning is significantly smaller than the time bounds achieved by the best (proper) algorithms known to date.

We evaluate the transformation technique for \(d=2\) on both artificial and real-life data sets and demonstrate that it provides a fast algorithm, which can successfully solve practical problems on large data sets.
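To illustrate the kind of base algorithm such a hierarchy could start from, the following sketch (our own illustration, not code from the paper) solves Minimum Disagreement for the trivial one-dimensional class of threshold functions: given labeled points on the line, it finds the threshold minimizing the number of disagreements in \(O(n \log n)\) time with a single sweep.

```python
def min_disagreement_threshold(points):
    """Illustrative base-case Minimum Disagreement solver (not from the paper).

    points: list of (x, label) pairs with label in {0, 1}; x values are
    assumed distinct for simplicity.
    Returns (threshold, errors) for the classifier h(x) = 1 iff x >= threshold.
    """
    pts = sorted(points)
    n = len(pts)
    total_ones = sum(label for _, label in pts)

    # Threshold below all points: every 1-labeled point is correct,
    # every 0-labeled point is misclassified.
    best_errors = n - total_ones
    best_threshold = float("-inf")

    errors = best_errors
    for i, (x, label) in enumerate(pts):
        # Moving the threshold just past x flips the prediction on x to 0:
        # a 1-labeled point becomes an error, a 0-labeled point is fixed.
        errors += 1 if label == 1 else -1
        if errors < best_errors:
            best_errors = errors
            # Place the threshold between x and the next point (or past the end).
            best_threshold = x + 1e-9 if i + 1 == n else (x + pts[i + 1][0]) / 2
    return best_threshold, best_errors
```

The transformations described in the abstract would lift such a solver from dimension d to dimension d+1; this one-dimensional sweep is merely a sketch of the "trivial problem" end of that pipeline.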



Author information

Correspondence to Christoph Ries.


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Darnstädt, M., Ries, C., Simon, H.U. (2015). Hierarchical Design of Fast Minimum Disagreement Algorithms. In: Chaudhuri, K., Gentile, C., Zilles, S. (eds) Algorithmic Learning Theory. ALT 2015. Lecture Notes in Computer Science, vol 9355. Springer, Cham. https://doi.org/10.1007/978-3-319-24486-0_9

  • DOI: https://doi.org/10.1007/978-3-319-24486-0_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-24485-3

  • Online ISBN: 978-3-319-24486-0

