Attribute weighting for averaged one-dependence estimators

Published in: Applied Intelligence

Abstract

Averaged one-dependence estimators (AODE) is a supervised learning algorithm that relaxes the conditional independence assumption underlying standard naïve Bayes learners. AODE has demonstrated reasonable improvements in classification performance over naïve Bayes. However, AODE does not account for the relationships between the super-parent attribute and the other, normal attributes. In this paper, we propose weighted AODE (WAODE), a novel attribute weighting method based on AODE that uses conditional mutual information to rank and weight the relations among attributes. We conducted experiments on University of California, Irvine (UCI) benchmark datasets and compared the accuracy of AODE with that of our proposed learner. The experimental results show that WAODE achieves higher classification accuracy than the original AODE.
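
To make the idea concrete, the following Python sketch shows one way an AODE-style classifier can be combined with attribute weights derived from conditional mutual information (CMI), as the abstract describes. This is an illustration, not the authors' implementation: the choice of weighting each super-parent by its mean CMI with the remaining attributes, the Laplace smoothing constants, and the names `WeightedAODE` and `conditional_mutual_information` are assumptions made for this example.

```python
# Illustrative sketch of CMI-weighted AODE (assumptions noted above);
# expects discrete-valued attributes and labels as integer NumPy arrays.
import numpy as np


def conditional_mutual_information(xi, xj, y):
    """Estimate I(Xi; Xj | Y) from discrete samples xi, xj, y (natural log)."""
    n = len(y)
    cmi = 0.0
    for c in np.unique(y):
        mask = (y == c)
        p_c = mask.sum() / n
        xi_c, xj_c = xi[mask], xj[mask]
        for a in np.unique(xi_c):
            for b in np.unique(xj_c):
                p_ab = np.mean((xi_c == a) & (xj_c == b))
                p_a = np.mean(xi_c == a)
                p_b = np.mean(xj_c == b)
                if p_ab > 0:
                    cmi += p_c * p_ab * np.log(p_ab / (p_a * p_b))
    return cmi


class WeightedAODE:
    """AODE where each super-parent attribute j is weighted by its
    average CMI with the remaining attributes (illustrative choice)."""

    def fit(self, X, y):
        self.X_, self.y_ = X, y
        self.classes_ = np.unique(y)
        d = X.shape[1]
        # Weight of super-parent j: mean CMI between X_j and the other attributes.
        self.w_ = np.array([
            np.mean([conditional_mutual_information(X[:, j], X[:, i], y)
                     for i in range(d) if i != j])
            for j in range(d)
        ])
        self.w_ = self.w_ / (self.w_.sum() + 1e-12)  # normalise to sum to 1
        return self

    @staticmethod
    def _laplace(count, total, k):
        return (count + 1.0) / (total + k)  # Laplace-smoothed probability estimate

    def predict(self, X_test):
        n, d = self.X_.shape
        preds = []
        for x in X_test:
            scores = []
            for c in self.classes_:
                Xc = self.X_[self.y_ == c]
                score = 0.0
                for j in range(d):  # attribute j acts as the super-parent
                    sel = Xc[Xc[:, j] == x[j]]
                    n_vals_j = len(np.unique(self.X_[:, j]))
                    # P(y = c, x_j) with Laplace smoothing
                    p_cj = self._laplace(len(sel), n, len(self.classes_) * n_vals_j)
                    prod = p_cj
                    for i in range(d):
                        if i == j:
                            continue
                        n_vals_i = len(np.unique(self.X_[:, i]))
                        cnt = np.sum(sel[:, i] == x[i])
                        # P(x_i | y = c, x_j) with Laplace smoothing
                        prod *= self._laplace(cnt, len(sel), n_vals_i)
                    score += self.w_[j] * prod  # weighted average over super-parents
                scores.append(score)
            preds.append(self.classes_[np.argmax(scores)])
        return np.array(preds)
```

When all CMI-based weights are equal, the weighted average over super-parents reduces to the uniform averaging of standard AODE, so the weighting only changes predictions where some super-parent relations are more informative than others.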



Acknowledgment

This work was supported by the Korea Creative Content Agency (KOCCA) grant funded by the Korea government (no. R2015090021).

Author information


Corresponding author

Correspondence to Dae-Ki Kang.


About this article


Cite this article

Xiang, ZL., Kang, DK. Attribute weighting for averaged one-dependence estimators. Appl Intell 46, 616–629 (2017). https://doi.org/10.1007/s10489-016-0854-3

