A Comparison of Supervised Learning Techniques for Clustering

Conference paper in Neural Information Processing (ICONIP 2015)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9489)

Abstract

Data mining has grown dramatically in significance over the past few years, to the point that many industries and academic disciplines now apply it in some form. Data mining is a broad subject encompassing many topics and problems; this paper focuses on the supervised learning classification problem and on ways to optimize the classification process. Four classification techniques (naive Bayes, support vector machine, decision tree, and random forest) were studied and applied to data sets from the UCI Machine Learning Repository. A Classification Learning Toolbox (CLT) was developed in the R statistical programming language to analyze the data sets and report the relationships among the four classifiers and their prediction accuracy.
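The CLT itself is not reproduced on this page, but the workflow the abstract describes can be illustrated in a few lines of R. The sketch below is a stand-in, not the authors' toolbox: it fits the same four classifiers to one UCI data set bundled with R (Fisher's iris) using the CRAN packages cited in the references (e1071, rpart, randomForest), with an arbitrary 70/30 split and default hyperparameters assumed, and reports held-out prediction accuracy plus one confusion matrix (cf. ref. 17).

# Minimal sketch of the comparison pipeline described in the abstract.
# NOT the authors' CLT: the data set, split ratio, and default settings
# are illustrative assumptions.
library(e1071)        # naiveBayes(), svm()   [refs 12]
library(rpart)        # decision trees        [ref 15]
library(randomForest) # random forests        [ref 13]

set.seed(42)
idx   <- sample(nrow(iris), 0.7 * nrow(iris))  # 70/30 train/test split
train <- iris[idx, ]
test  <- iris[-idx, ]

# Fit all four classifiers with default hyperparameters.
models <- list(
  naive_bayes   = naiveBayes(Species ~ ., data = train),
  svm           = svm(Species ~ ., data = train),
  decision_tree = rpart(Species ~ ., data = train, method = "class"),
  random_forest = randomForest(Species ~ ., data = train)
)

# Held-out prediction accuracy; rpart needs type = "class" to return labels.
accuracy <- sapply(names(models), function(m) {
  pred <- if (m == "decision_tree") {
    predict(models[[m]], test, type = "class")
  } else {
    predict(models[[m]], test)
  }
  mean(pred == test$Species)
})
print(round(accuracy, 3))

# Confusion matrix for one of the models (cf. ref. 17).
table(predicted = predict(models$random_forest, test),
      actual    = test$Species)

A fuller comparison along the paper's lines would repeat this over several UCI data sets and cross-validation folds (e.g., via the caret package, ref. 14) rather than a single random split.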


References

  1. Tan, P., Steinbach, M., Kumar, V.: Introduction to Data Mining. Pearson Addison Wesley, Boston (2006)


  2. Khoury, M.J.: Public health approach to big data in the age of genomics: how can we separate signal from noise? Centers for Disease Control and Prevention (2014). http://blogs.cdc.gov/genomics/2014/10/30/public-health-approach/

  3. Donalek, C.: Supervised and Unsupervised Learning (2011). http://www.astro.caltech.edu/~george/aybi199/Donalek_Classif.pdf

  4. An Introduction to Bayes' Theorem. http://www.trinity.edu/cbrown/bayesweb/

  5. Leyton-Brown, K.: Reasoning Under Uncertainty: Marginal and Conditional Independence. http://www.cs.ubc.ca/~kevinlb/teaching/cs322%20-%202006-7/Lectures/lect25.pdf

  6. Posterior Probability. Investopedia US, A Division of IAC. http://www.investopedia.com/terms/p/posterior-probability.asp

  7. Sayad, S.: An Introduction to Data Mining (2015). http://www.saedsayad.com/

  8. Ho, T.: The random subspace method for constructing decision forests. IEEE Trans. Pattern Anal. Mach. Intell. 20(8), 832–844 (1998)


  9. Lichman, M.: UCI Machine Learning Repository (2013). http://archive.ics.uci.edu/ml

  10. Little, M.A., McSharry, P.E., Roberts, S.J., Costello, D., Moroz, I.: Exploiting nonlinear recurrence and fractal scaling properties for voice disorder detection. BioMed. Eng. Online 6(23) (2007). doi:10.1186/1475-925X-6-23

  11. Yeh, I., Yang, K.J., Ting, T.: Knowledge discovery on RFM model using Bernoulli sequence. Expert Syst. Appl. 36, 5866–5871 (2008)


  12. Meyer, D., Dimitriadou, E., Hornik, K., Weingessel, A., Leisch, F.: e1071: Misc Functions of the Department of Statistics. Institute for Statistics and Mathematics of WU, TU Wien (2014). http://cran.r-project.org/web/packages/e1071/e1071.pdf

  13. Cutler, A., Breiman, L., Liaw, A., Wiener, M.: randomForest: Breiman and Cutler's Random Forests for Classification and Regression. Institute for Statistics and Mathematics (2015). http://cran.r-project.org/web/packages/randomForest/randomForest.pdf

  14. Kuhn, M., Wing, J., Weston, S., Williams, A., Keefer, C., Engelhardt, A., Cooper, T., Mayer, Z., Kenkel, B., Benesty, M., Lescarbeau, R., Ziem, A., Scrucca, L.: Classification and Regression Training. Institute for Statistics and Mathematics (2015). http://cran.r-project.org/web/packages/caret/caret.pdf

  15. Therneau, T., Atkinson, B., Ripley, B.: Recursive partitioning for classification, regression and survival trees. An implementation of most of the functionality of the 1984 book by Breiman, Friedman, Olshen and Stone. Institute for Statistics and Mathematics (2015). http://cran.r-project.org/web/packages/rpart/rpart.pdf

  16. Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: SMOTE: synthetic minority over-sampling technique. J. Artif. Intell. Res. 16, 321–357 (2002)


  17. Hamilton, H.: Confusion Matrix (2012). http://www2.cs.uregina.ca/~dbd/cs831/notes/confusion_matrix/confusion_matrix.html


Acknowledgement

This work was supported by a grant from the Rowan University Mathematics Department.

Author information

Correspondence to William Ezekiel or Umashanger Thayasivam.


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Ezekiel, W., Thayasivam, U. (2015). A Comparison of Supervised Learning Techniques for Clustering. In: Arik, S., Huang, T., Lai, W., Liu, Q. (eds) Neural Information Processing. ICONIP 2015. Lecture Notes in Computer Science, vol. 9489. Springer, Cham. https://doi.org/10.1007/978-3-319-26532-2_52

  • DOI: https://doi.org/10.1007/978-3-319-26532-2_52

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-26531-5

  • Online ISBN: 978-3-319-26532-2
