AUC: A Better Measure than Accuracy in Comparing Learning Algorithms

  • Conference paper
  • In: Advances in Artificial Intelligence (Canadian AI 2003)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2671)

Abstract

Predictive accuracy has been widely used as the main criterion for comparing the predictive ability of classification systems (such as C4.5, neural networks, and Naive Bayes). Most of these classifiers also produce probability estimates for their classifications, but these estimates are ignored entirely by the accuracy measure. This is often taken for granted, because both the training and testing sets provide only class labels. In this paper we establish rigorously that, even in this setting, the area under the ROC (Receiver Operating Characteristic) curve, or simply AUC, provides a better measure than accuracy. Our result is significant for three reasons. First, we establish, for the first time, rigorous criteria for comparing evaluation measures for learning algorithms. Second, our result suggests that AUC should replace accuracy when measuring and comparing classification systems. Third, it prompts us to re-evaluate many well-established conclusions in machine learning that are based on accuracy. For example, it is well accepted in the machine learning community that, in terms of predictive accuracy, Naive Bayes and decision trees perform very similarly. Using AUC, however, we show experimentally that Naive Bayes is significantly better than decision-tree learning algorithms.
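
To make the distinction concrete, consider the following minimal sketch (ours, not from the paper; the labels and probability estimates are invented for illustration). It computes AUC as the Wilcoxon-Mann-Whitney statistic, i.e. the fraction of (positive, negative) example pairs whose probability estimates are ranked correctly, with ties counted as half:

    def auc(labels, scores):
        # Wilcoxon-Mann-Whitney statistic: the fraction of (positive, negative)
        # pairs ranked correctly by the scores; ties count as half-correct.
        pos = [s for y, s in zip(labels, scores) if y == 1]
        neg = [s for y, s in zip(labels, scores) if y == 0]
        correct = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return correct / (len(pos) * len(neg))

    def accuracy(labels, scores, threshold=0.5):
        # Thresholded 0/1 predictions; the ordering of the scores is discarded.
        return sum((s >= threshold) == y for y, s in zip(labels, scores)) / len(labels)

    labels   = [1, 1, 1, 0, 0, 0]                # three positives, three negatives
    scores_a = [0.9, 0.8, 0.4, 0.6, 0.2, 0.1]    # classifier A's probability estimates
    scores_b = [0.9, 0.8, 0.1, 0.6, 0.45, 0.4]   # classifier B's probability estimates

    print(accuracy(labels, scores_a), accuracy(labels, scores_b))  # 0.667  0.667
    print(auc(labels, scores_a), auc(labels, scores_b))            # 0.889  0.667

At threshold 0.5 both classifiers misclassify the same two examples, so accuracy cannot distinguish them; AUC can, because A ranks its misclassified positive (0.4) above two of the negatives, while B ranks its misclassified positive (0.1) below all three.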

Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ling, C.X., Huang, J., Zhang, H. (2003). AUC: A Better Measure than Accuracy in Comparing Learning Algorithms. In: Xiang, Y., Chaib-draa, B. (eds) Advances in Artificial Intelligence. Canadian AI 2003. Lecture Notes in Computer Science, vol 2671. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44886-1_25

  • DOI: https://doi.org/10.1007/3-540-44886-1_25

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40300-5

  • Online ISBN: 978-3-540-44886-0
