Probabilistic Inference Trees for Classification and Ranking

  • Conference paper
Advances in Artificial Intelligence (Canadian AI 2006)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4013)

Abstract

In many applications, an accurate ranking of instances is as important as accurate classification. However, it has been observed that traditional decision trees perform well in classification but poorly in ranking. In this paper, we point out that there is an inherent obstacle that prevents traditional decision trees from achieving both accurate classification and accurate ranking. We propose to understand decision trees from a probabilistic perspective and to use probability theory to compute probability estimates and to perform classification and ranking. The new model is called probabilistic inference trees (PITs). Our experiments show that the PIT learning algorithm performs well in both ranking and classification. More precisely, it significantly outperforms the state-of-the-art decision tree learning algorithms designed for ranking, such as C4.4 [10] and Ling and Yan's algorithm [6], and performs competitively in classification with traditional decision tree learning algorithms, such as C4.5. Our research provides a novel algorithm for applications in which both accurate classification and ranking are desired.
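The paper itself gives no code here; as background for the contrast the abstract draws, the following is a minimal, self-contained Python sketch of the standard probability-estimation-tree idea it builds on: leaf class frequencies smoothed with a Laplace correction, as in C4.4 [10], and the resulting scores evaluated as a ranking via AUC [4]. The leaf counts, test instances, and function names are illustrative assumptions only, not the PIT algorithm proposed in the paper.

```python
def laplace_estimate(pos, neg, num_classes=2):
    """Laplace-corrected class probability at a leaf: (pos + 1) / (pos + neg + num_classes)."""
    return (pos + 1) / (pos + neg + num_classes)

def auc(scored):
    """AUC via the rank-sum (Mann-Whitney) formulation; ties are broken by sort order."""
    ranked = sorted(scored, key=lambda sl: sl[0])
    n_pos = sum(1 for _, y in ranked if y == 1)
    n_neg = len(ranked) - n_pos
    # Sum of (1-based) ranks of the positive instances in the score ordering.
    rank_sum = sum(i + 1 for i, (_, y) in enumerate(ranked) if y == 1)
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Illustrative leaf statistics: (positive count, negative count) observed in each leaf.
leaves = [(40, 2), (15, 15), (1, 30)]
probs = [laplace_estimate(p, n) for p, n in leaves]
print("Leaf probability estimates:", probs)

# Illustrative test instances: (index of the leaf the instance falls into, true label).
test = [(0, 1), (0, 1), (1, 1), (1, 0), (2, 0), (2, 0)]
print("AUC of the induced ranking:", auc([(probs[leaf], y) for leaf, y in test]))
```

Because every instance reaching the same leaf receives the same score, a tree with few leaves produces a coarse ranking even when its classifications are accurate; this is the obstacle the abstract refers to.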

References

  1. Bauer, E., Kohavi, R.: An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants. Machine Learning 36(1-2), 105–139 (1999)

  2. Buntine, W.: Learning Classification Trees. In: Artificial Intelligence Frontiers in Statistics, pp. 182–201. Chapman & Hall, London (1991)

  3. Ferri, C., Flach, P.A., Hernández-Orallo, J.: Improving the AUC of Probabilistic Estimation Trees. In: Proceedings of the 14th European Conference on Machine Learning, pp. 121–132. Springer, Heidelberg (2003)

  4. Hand, D.J., Till, R.J.: A simple generalisation of the area under the ROC curve for multiple class classification problems. Machine Learning 45, 171–186 (2001)

  5. Kohavi, R.: Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid. In: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-1996), pp. 202–207. AAAI Press, Menlo Park (1996)

  6. Ling, C.X., Yan, R.J.: Decision Tree with Better Ranking. In: Proceedings of the 20th International Conference on Machine Learning, pp. 480–487. Morgan Kaufmann, San Francisco (2003)

  7. Pazzani, M., Merz, C., Murphy, P., Ali, K., Hume, T., Brunk, C.: Reducing misclassification costs. In: Proceedings of the 11th International Conference on Machine Learning, pp. 217–225. Morgan Kaufmann, San Francisco (1994)

  8. Provost, F., Fawcett, T.: Analysis and visualization of classifier performance: comparison under imprecise class and cost distributions. In: Proceedings of the Third International Conference on Knowledge Discovery and Data Mining, pp. 43–48. AAAI Press, Menlo Park (1997)

  9. Provost, F., Fawcett, T., Kohavi, R.: The case against accuracy estimation for comparing induction algorithms. In: Proceedings of the Fifteenth International Conference on Machine Learning, pp. 445–453. Morgan Kaufmann, San Francisco (1998)

  10. Provost, F.J., Domingos, P.: Tree Induction for Probability-Based Ranking. Machine Learning 52(3), 199–215 (2003)

  11. Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo (1993)

  12. Smyth, P., Gray, A., Fayyad, U.: Retrofitting decision tree classifiers using kernel density estimation. In: Proceedings of the Twelfth International Conference on Machine Learning, pp. 506–514. Morgan Kaufmann, San Francisco (1995)

  13. Su, J., Zhang, H.: Representing Conditional Independence Using Decision Trees. In: Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI-2005), pp. 874–879. AAAI Press, Menlo Park (2005)

  14. Witten, I.H., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann, San Francisco (2000)

  15. Zhang, H., Su, J.: Conditional Independence Trees. In: Boulicaut, J.-F., Esposito, F., Giannotti, F., Pedreschi, D. (eds.) ECML 2004. LNCS (LNAI), vol. 3201, pp. 513–524. Springer, Heidelberg (2004)

Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Su, J., Zhang, H. (2006). Probabilistic Inference Trees for Classification and Ranking. In: Lamontagne, L., Marchand, M. (eds) Advances in Artificial Intelligence. Canadian AI 2006. Lecture Notes in Computer Science (LNAI), vol 4013. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11766247_45

  • DOI: https://doi.org/10.1007/11766247_45

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-34628-9

  • Online ISBN: 978-3-540-34630-2

  • eBook Packages: Computer Science (R0)
