
Comparative Testing of Hyper-Planar Classifiers on Continuous Data Domains

  • Conference paper

Abstract

This paper details a set of comparative tests conducted between five classification algorithms using three real-world, continuously valued data sets. The algorithms were selected to represent the two most popular classification methods, neural networks and decision trees, as well as hybrid algorithms which incorporate features of both techniques. These hybrid algorithms construct an architecture to model the problem domain.

The three real-world data sets have previously been used in the StatLog tests [1], and these experiments can be viewed as an extension of that work. Due to the nature of these data sets, each contains some level of noise, which affects the learning procedure to varying degrees. A maximum bound on a classifier's generalisation is discussed, which arises from the loss of information incurred when allowing for noise in a model of the data domain.

The results of these tests establish the levels of performance which can be achieved using hyper-planar classifiers on noisy, continuously valued data sets.
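The experimental setup described above can be illustrated in miniature. The sketch below is not the paper's own algorithms or data sets; it assumes scikit-learn is available and uses a standard decision tree and a multi-layer perceptron as stand-ins for the two classifier families, trained on a synthetic continuously valued data set with 10% label noise (the `flip_y` parameter) to mimic a noisy domain.

```python
# Hedged sketch: a comparative test between a decision-tree and a
# neural-network classifier on a noisy, continuously valued data set.
# These are stand-ins for the paper's five algorithms, not reimplementations.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Continuous features with 10% of labels flipped to simulate domain noise.
X, y = make_classification(n_samples=1000, n_features=8, flip_y=0.10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

results = {}
for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("neural net", MLPClassifier(max_iter=500, random_state=0))]:
    clf.fit(X_tr, y_tr)
    results[name] = clf.score(X_te, y_te)  # held-out accuracy

for name, acc in results.items():
    print(f"{name}: {acc:.3f}")
```

Because roughly 10% of the training labels are wrong, neither classifier can reach perfect held-out accuracy, which is the intuition behind the maximum generalisation bound the paper discusses.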


References

  1. Michie D., Spiegelhalter D.J., Taylor C.C.: Machine Learning, Neural and Statistical Classification, Ellis Horwood Series in Artificial Intelligence, Ellis Horwood, 1994.

  2. Bai B. and Farhat N. H.: Learning Networks for Extrapolation and Radar Target Identification, Neural Networks, pp. 507–529, 1992.

  3. Chow M. and Mangum P.: Incipient Fault Detection in DC Machines Using a Neural Network, IEEE 22nd Asilomar Conference on Signals, Systems and Computers, Vol. 2, pp. 706–709, 1989.

  4. Rumelhart D., Hinton G., Williams R.: Learning Representations by Back-Propagating Errors, Nature, Vol. 323, pp. 533–535, 1986.

  5. Hertz J., Krogh A., Palmer R.: Introduction to the Theory of Neural Computation, Santa Fe Institute, Addison-Wesley, 1991.

  6. McLean D., Bandar Z., O’Shea J.: The Evolution of a Feed Forward Neural Network trained under Back-Propagation, ICANNGA ‘97, 1997.

  7. Sethi I.K., Sarvarayudu G.P.R.: Hierarchical Classifier Design Using Mutual Information, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-4, No. 4, pp. 441–445, 1982.

  8. Sankar A. and Mammone R.J.: Optimal Pruning of Neural Tree Networks for Improved Generalisation. IEEE International Joint Conference on Neural Networks — Seattle, Vol. 2, pp. 219–224, 1991.

  9. Sankar A. and Mammone R.J.: Speaker Independent Vowel Recognition using Neural Tree Networks, Proceedings of the International Joint Conference on Neural Networks, Vol.2, pp. 809–814, 1991.

  10. Sethi I.K.: Entropy Nets: From Decision Trees to Neural Networks, Proceedings of the IEEE, Vol. 78, No. 10, pp. 1605–1613, 1990.

  11. Sethi I.K. and Otten M.: Comparison Between Entropy Net and Decision Tree Classifiers, International Joint Conference on Neural Networks, Vol. 3, pp. 63–68, 1990.

  12. McLean D., Bandar Z., O’Shea J.: Improved Interpolation and Extrapolation from Continuous Training Examples Using a New Neuronal Model with an Adaptive Steepness, 2nd Australian and New Zealand Conference on Intelligent Information Systems, IEEE, pp. 125–129, 1994.

  13. McLean D., Bandar Z., O’Shea J.: An Empirical Comparison of Back Propagation and the RDSE Algorithm on Continuously Valued Real World Data, Neural Networks, Vol. 11, pp. 1685–1694, 1998.

  14. McLean D.: RDSE Algorithm, http://www.doc.mmu.ac.uk/STAFF/D.McLean/RDSE, 1998.

  15. Quinlan J.R.: Induction of Decision Trees, Machine Learning, Vol. 1, pp. 81–106, 1986.

  16. Baba N.: A New Approach for Finding the Global Minimum of Error Function of Neural Networks, Neural Networks, Vol. 2, pp. 367–373, 1989.

  17. Lachenbruch P. and Mickey M.: Estimation of Error Rates in Discriminant Analysis, Technometrics, Vol. 10, pp. 1–11, 1968.


Copyright information

© 1999 Springer-Verlag Wien

About this paper

Cite this paper

McLean, D., Bandar, Z. (1999). Comparative Testing of Hyper-Planar Classifiers on Continuous Data Domains. In: Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-6384-9_4

  • DOI: https://doi.org/10.1007/978-3-7091-6384-9_4

  • Publisher Name: Springer, Vienna

  • Print ISBN: 978-3-211-83364-3

  • Online ISBN: 978-3-7091-6384-9
