
Sifting the Margin – An Iterative Empirical Classification Scheme

  • Conference paper
PRICAI 2004: Trends in Artificial Intelligence (PRICAI 2004)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3157)

Included in the following conference series: PRICAI: Pacific Rim International Conference on Artificial Intelligence

Abstract

Attribute (feature) selection is an important step in designing a classifier. It often reduces to a choice between computationally simple schemes, based on a small subset of attributes, that do not search the attribute space, and more complex schemes, based on a large subset or the entire set of available attributes, that are computationally intractable. Usually a compromise is reached: a computationally tractable scheme that relies on a subset of attributes optimizing a certain criterion. The result is typically a 'good' but sub-optimal solution that may still require a fair amount of computation. This paper presents an approach that does not commit to any particular subset of the available attributes. Instead, the classifier uses each attribute successively, as needed, to classify a given data point. If the data set is separable in the given attribute space, the algorithm classifies a given point with no errors. The resulting classifier is transparent, and the approach compares favorably with previous approaches in both accuracy and efficiency.
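The abstract describes the scheme only at a high level. One minimal reading of "uses each attribute successively as needed" is: learn per-class value intervals for every attribute, then examine a test point one attribute at a time, classifying it as soon as its value falls inside exactly one class's interval and passing it to the next attribute while it remains in the overlapping margin. The sketch below illustrates that reading; it is an assumption, not the authors' exact algorithm, and all names (fit_intervals, classify_sifting) are hypothetical.

```python
# Illustrative sketch of an iterative, attribute-by-attribute classifier,
# loosely following the abstract's description. NOT the paper's exact method.
from collections import defaultdict


def fit_intervals(X, y):
    """For each attribute, record the [min, max] value interval of each class."""
    n_attrs = len(X[0])
    intervals = [defaultdict(lambda: [float("inf"), float("-inf")])
                 for _ in range(n_attrs)]
    for row, label in zip(X, y):
        for j, v in enumerate(row):
            lo, hi = intervals[j][label]
            intervals[j][label] = [min(lo, v), max(hi, v)]
    return intervals


def classify_sifting(x, intervals, default=None):
    """Consider attributes one at a time; stop as soon as exactly one
    class's interval contains the value, i.e. the point leaves the margin."""
    for j, v in enumerate(x):
        candidates = [c for c, (lo, hi) in intervals[j].items() if lo <= v <= hi]
        if len(candidates) == 1:   # unambiguous on this attribute
            return candidates[0]
    return default                 # still in the margin after all attributes


# Toy usage: the two classes overlap on attribute 0 but separate on attribute 1.
X = [[1.0, 0.2], [1.1, 0.3], [0.9, 0.9], [1.2, 1.1]]
y = ["a", "a", "b", "b"]
model = fit_intervals(X, y)
print(classify_sifting([1.05, 0.25], model))  # -> 'a' (decided on attribute 1)
```

Note how this matches the abstract's claim about separability: if some attribute separates the classes, every point is eventually classified without error; only points that remain ambiguous on all attributes fall through to the default.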






Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Vance, D., Ralescu, A. (2004). Sifting the Margin – An Iterative Empirical Classification Scheme. In: Zhang, C., Guesgen, H.W., Yeap, W.K. (eds) PRICAI 2004: Trends in Artificial Intelligence. Lecture Notes in Computer Science (LNAI), vol 3157. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-28633-2_22


  • DOI: https://doi.org/10.1007/978-3-540-28633-2_22

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-22817-2

  • Online ISBN: 978-3-540-28633-2

  • eBook Packages: Springer Book Archive
