Ensemble of classifier chains and Credal C4.5 for solving multi-label classification

  • Regular Paper
Progress in Artificial Intelligence

Abstract

In this work, we consider the ensemble of classifier chains (ECC) algorithm to solve the multi-label classification (MLC) task. ECC starts from the binary relevance (BR) algorithm, a simple and direct approach to MLC that has been shown to provide good results in practice. Unlike BR, however, ECC aims to exploit the correlations between labels. ECC uses a traditional supervised classification algorithm to approach the binary problems. Within this field, Credal C4.5 (CC4.5) is a new version of the well-known C4.5 algorithm that uses imprecise probabilities to estimate the probability distribution of the class variable. CC4.5 has been shown to perform better than C4.5 when noisy datasets are classified. In MLC, the intrinsic noise might be higher than in traditional supervised classification, for a simple reason: in MLC there are multiple labels, whereas in traditional classification there is a single class variable, so the probability of error for an instance is higher. For these reasons, the performance of ECC with CC4.5 as base classifier is studied in this work. We have carried out an extensive experimental analysis with several multi-label datasets, different noise levels and a large number of MLC evaluation metrics. This experimental study shows that, in general, ECC performs better with CC4.5 as base classifier than with C4.5, and the improvement becomes more significant as the label noise level introduced in the data increases. Therefore, imprecise probabilities appear to be suitable for decision trees within MLC.

References

  1. Abellán, J.: Uncertainty measures on probability intervals from the imprecise Dirichlet model. Int. J. Gen. Syst. 35(5), 509–528 (2006). https://doi.org/10.1080/03081070600687643

  2. Abellán, J.: Ensembles of decision trees based on imprecise probabilities and uncertainty measures. Inf. Fusion 14(4), 423–430 (2013)

  3. Abellán, J., Mantas, C.J.: Improving experimental studies about ensembles of classifiers for bankruptcy prediction and credit scoring. Expert Syst. Appl. 41(8), 3825–3830 (2014). https://doi.org/10.1016/j.eswa.2013.12.003

  4. Abellán, J., Masegosa, A.: An experimental study about simple decision trees for bagging ensemble on datasets with classification noise. In: Sossai, C., Chemello, G. (eds.) Symbolic and Quantitative Approaches to Reasoning with Uncertainty, vol. 5590, pp. 446–456. Springer, Berlin (2009). https://doi.org/10.1007/978-3-642-02906-6_39

  5. Abellán, J., Moral, S.: Building classification trees using the total uncertainty criterion. Int. J. Intell. Syst. 18(12), 1215–1225 (2003). https://doi.org/10.1002/int.10143

  6. Alves, R.T., Delgado, M.R., Freitas, A.A.: Knowledge discovery with artificial immune systems for hierarchical multi-label classification of protein functions. In: International Conference on Fuzzy Systems, pp. 1–8 (2010). https://doi.org/10.1109/FUZZY.2010.5584298

  7. Barutcuoglu, Z., Schapire, R.E., Troyanskaya, O.G.: Hierarchical multi-label prediction of gene function. Bioinformatics 22(7), 830–836 (2006). https://doi.org/10.1093/bioinformatics/btk048

  8. Boutell, M.R., Luo, J., Shen, X., Brown, C.M.: Learning multi-label scene classification. Pattern Recognit. 37(9), 1757–1771 (2004). https://doi.org/10.1016/j.patcog.2004.03.009

  9. Briggs, F., Huang, Y., Raich, R., Eftaxias, K., Lei, Z., Cukierski, W., Hadley, S.F., Hadley, A., Betts, M., Fern, X.Z., Irvine, J., Neal, L., Thomas, A., Fodor, G., Tsoumakas, G., Ng, H.W., Nguyen, T.N.T., Huttunen, H., Ruusuvuori, P., Manninen, T., Diment, A., Virtanen, T., Marzat, J., Defretin, J., Callender, D., Hurlburt, C., Larrey, K., Milakov, M.: The 9th annual MLSP competition: new methods for acoustic classification of multiple simultaneous bird species in a noisy environment. In: 2013 IEEE International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1–8 (2013). https://doi.org/10.1109/MLSP.2013.6661934

  10. Charte, D., Charte, F., García, S., Herrera, F.: A snapshot on nonstandard supervised learning problems: taxonomy, relationships, problem transformations and algorithm adaptations. Prog. Artif. Intell. (2019). https://doi.org/10.1007/s13748-018-00167-7. (in press)

  11. Charte, F., Rivera, A., del Jesus, M., Herrera, F.: Multilabel Classification: Problem Analysis, Metrics and Techniques. Springer, Berlin (2016)

  12. Charte, F., Rivera, A.J., Charte, D., del Jesus, M.J., Herrera, F.: Tips, guidelines and tools for managing multi-label datasets: the mldr.datasets R package and the Cometa data repository. Neurocomputing (2018). https://doi.org/10.1016/j.neucom.2018.02.011. (in press)

  13. Charte, F., Rivera, A.J., del Jesus, M.J., Herrera, F.: Addressing imbalance in multilabel classification: measures and random resampling algorithms. Neurocomputing 163, 3–16 (2015). https://doi.org/10.1016/j.neucom.2014.08.091. (Recent advancements in hybrid artificial intelligence systems and its application to real-world problems progress in intelligent systems mining humanistic data)

  14. Clare, A., King, R.D.: Knowledge discovery in multi-label phenotype data. In: De Raedt, L., Siebes, A. (eds.) Principles of Data Mining and Knowledge Discovery, pp. 42–53. Springer, Berlin (2001)

  15. Demšar, J.: Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 7, 1–30 (2006)

  16. Diplaris, S., Tsoumakas, G., Mitkas, P.A., Vlahavas, I.: Protein classification with multiple algorithms. In: Bozanis, P., Houstis, E.N. (eds.) Advances in Informatics, pp. 448–456. Springer, Berlin (2005)

  17. Duygulu, P., Barnard, K., de Freitas, J.F.G., Forsyth, D.A.: Object recognition as machine translation: learning a lexicon for a fixed image vocabulary. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) Computer Vision—ECCV 2002, pp. 97–112. Springer, Berlin (2002)

  18. Elisseeff, A., Weston, J.: A kernel method for multi-labelled classification. In: Advances in Neural Information Processing Systems 14, vol. 14, pp. 681–687 (2001). https://dl.acm.org/citation.cfm?id=2980539.2980628

  19. Fürnkranz, J., Hüllermeier, E., Loza Mencía, E., Brinker, K.: Multilabel classification via calibrated label ranking. Mach. Learn. 73, 133–153 (2008). https://doi.org/10.1007/s10994-008-5064-8

  20. Ghamrawi, N., McCallum, A.: Collective multi-label classification. In: Proceedings of the 14th ACM International Conference on Information and Knowledge Management, pp. 195–200. ACM (2005). https://doi.org/10.1145/1099554.1099591

  21. Gibaja, E., Ventura, S.: A tutorial on multilabel learning. ACM Comput. Surv. 47(3), 52:1–52:38 (2015). https://doi.org/10.1145/2716262

  22. Godbole, S., Sarawagi, S.: Discriminative methods for multi-labeled classification. In: Advances in Knowledge Discovery and Data Mining, pp. 22–30. Springer, Berlin (2004). https://doi.org/10.1007/978-3-540-24775-3_5

  23. Ioannou, M., Sakkas, G., Tsoumakas, G., Vlahavas, I.: Obtaining bipartitions from score vectors for multi-label classification. In: 2010 22nd IEEE International Conference on Tools with Artificial Intelligence, vol. 1, pp. 409–416 (2010)

  24. Katakis, I., Tsoumakas, G., Vlahavas, I.: Multilabel text classification for automated tag suggestion. In: Proceedings of the ECML/PKDD 2008 Discovery Challenge (2008)

  25. Klimt, B., Yang, Y.: The enron corpus: a new dataset for email classification research. In: Boulicaut, J.-F., Esposito, F., Giannotti, F., Pedreschi, D. (eds.) Machine Learning: ECML 2004, pp. 217–226. Springer, Berlin (2004)

  26. Klir, G.J.: Uncertainty and Information: Foundations of Generalized Information Theory. Wiley, New York (2005). https://doi.org/10.1002/0471755575

  27. Madjarov, G., Kocev, D., Gjorgjevikj, D., Džeroski, S.: An extensive experimental comparison of methods for multi-label learning. Pattern Recognit. 45(9), 3084–3104 (2012). https://doi.org/10.1016/j.patcog.2012.03.004

  28. Mantas, C.J., Abellán, J.: Credal-C4.5: decision tree based on imprecise probabilities to classify noisy data. Expert Syst. Appl. 41(10), 4625–4637 (2014). https://doi.org/10.1016/j.eswa.2014.01.017

  29. Mantas, C.J., Abellán, J., Castellano, J.G.: Analysis of Credal-C4.5 for classification in noisy domains. Expert Syst. Appl. 61, 314–326 (2016). https://doi.org/10.1016/j.eswa.2016.05.035

  30. McCallum, A.: Multi-label text classification with a mixture model trained by EM. In: AAAI’99 Workshop on Text Learning, pp. 1–7 (1999)

  31. Nasierding, G., Kouzani, A.: Image to text translation by multi-label classification. In: Advanced Intelligent Computing Theories and Applications. With Aspects of Artificial Intelligence, vol. 6216, pp. 247–254. Springer, Berlin (2010). https://doi.org/10.1007/978-3-642-14932-0_31

  32. Pestian, J.P., Brew, C., Matykiewicz, P., Hovermale, D.J., Johnson, N., Cohen, K.B., Duch, W.: A shared task involving multi-label classification of clinical free text. In: Proceedings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Language Processing, pp. 97–104. Association for Computational Linguistics (2007)

  33. Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers Inc., San Francisco (1993)

  34. R Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria (2013). http://www.R-project.org/

  35. Read, J., Pfahringer, B., Holmes, G., Frank, E.: Classifier chains for multi-label classification. Mach. Learn. 85(3), 333 (2011). https://doi.org/10.1007/s10994-011-5256-5

  36. Schapire, R.E., Singer, Y.: Boostexter: a boosting-based system for text categorization. Mach. Learn. 39(2), 135–168 (2000). https://doi.org/10.1023/A:1007649029923

  37. Shannon, C.E.: A mathematical theory of communication. Bell Syst. Tech. J. 27(3), 379–423 (1948). https://doi.org/10.1002/j.1538-7305.1948.tb01338.x

  38. Snoek, C.G.M., Worring, M., van Gemert, J.C., Geusebroek, J.-M., Smeulders, A.W.M.: The challenge problem for automated detection of 101 semantic concepts in multimedia. In: Proceedings of the 14th ACM International Conference on Multimedia, pp. 421–430. ACM (2006). https://doi.org/10.1145/1180639.1180727

  39. Sousa, R., Gama, J.: Multi-label classification from high-speed data streams with adaptive model rules and random rules. Prog. Artif. Intell. 7(3), 177–187 (2018). https://doi.org/10.1007/s13748-018-0142-z

  40. Trohidis, K., Tsoumakas, G., Kalliris, G., Vlahavas, I.P.: Multi-label classification of music into emotions. In: ISMIR, vol. 8, pp. 325–330 (2008)

  41. Tsoumakas, G., Katakis, I., Vlahavas, I.: Effective and efficient multilabel classification in domains with large number of labels. In: Proc. ECML/PKDD 2008 Workshop on Mining Multidimensional Data (MMD’08), pp. 30–44 (2008)

  42. Tsoumakas, G., Spyromitros-Xioufis, E., Vilcek, J., Vlahavas, I.: Mulan: a java library for multi-label learning. J. Mach. Learn. Res. 12, 2411–2414 (2011)

  43. Tsoumakas, G., Vlahavas, I.: Random k-labelsets: an ensemble method for multilabel classification. In: European Conference on Machine Learning, pp. 406–417. Springer (2007). https://doi.org/10.1007/978-3-540-74958-5_38

  44. Turnbull, D., Barrington, L., Torres, D., Lanckriet, G.: Semantic annotation and retrieval of music and sound effects. IEEE Trans. Audio Speech Lang. Process. 16(2), 467–476 (2008). https://doi.org/10.1109/TASL.2007.913750

  45. Walley, P.: Inferences from multinomial data: learning about a bag of marbles (with discussion). J. R. Stat. Soc. Ser. B (Methodological) 58(1), 3–57 (1996). https://doi.org/10.2307/2346164

  46. Wilcoxon, F.: Individual comparisons by ranking methods. Biom. Bull. 1(6), 80–83 (1945). https://doi.org/10.2307/3001968

  47. Witten, I.H., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques, 2nd edn. Morgan Kaufmann Publishers Inc., San Francisco (2005)

  48. Zhang, M.-L., Zhou, Z.-H.: Multilabel neural networks with applications to functional genomics and text categorization. IEEE Trans. Knowl. Data Eng. 18(10), 1338–1351 (2006). https://doi.org/10.1109/TKDE.2006.162

  49. Zhang, M.L., Zhou, Z.H.: A review on multi-label learning algorithms. IEEE Trans. Knowl. Data Eng. 26(8), 1819–1837 (2014). https://doi.org/10.1109/TKDE.2013.39

Acknowledgements

This work has been supported by the Spanish “Ministerio de Economía y Competitividad” and by “Fondo Europeo de Desarrollo Regional” (FEDER) under Project TEC2015-69496-R.

Author information

Corresponding author

Correspondence to Carlos J. Mantas.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Evaluation measures

In this Appendix, the measures used to compare the algorithms in the experiments are explained. We suppose that we have a test set of N instances \(\left\{ \mathbf{x }_\mathbf{1 }, \ldots , \mathbf{x }_\mathbf{N }\right\} \). The same notation as in Sect. 2 is used.

Example-based measures

Example-based metrics [20, 22, 36] evaluate the predictions made on each of the test instances.

  • Subset accuracy It is a very strict metric that measures the proportion of examples whose predicted label set exactly matches its set of relevant labels:

    $$\begin{aligned} \text {Subset}\_\text {Accuracy}(h) = \frac{1}{N}\sum _{i=1}^{N}I(h(\mathbf{x }_\mathbf{i }) = \mathbf{L }_i) \end{aligned}$$
    (21)
  • Hamming loss It measures how many times an example–label pair is classified incorrectly, i.e., how many times, on average, an irrelevant label is predicted or a relevant label is not predicted:

    $$\begin{aligned} \text {Hamming}\_\text {Loss}(h) = \frac{1}{N}\sum _{i=1}^{N}\frac{1}{q}\left| h(\mathbf{x }_i)\triangle \mathbf{L }_i\right| \end{aligned}$$
    (22)

    where \(\triangle \) denotes the symmetric difference between two sets, i.e., the set of elements that belong to one of the sets but not to the other.

  • Accuracy The example-based accuracy is defined by the average Jaccard similarity coefficients between the predicted label sets and the relevant label sets over the examples. Formally:

    $$\begin{aligned} \text {Accuracy}(h) = \frac{1}{N}\sum _{i=1}^{N}\frac{\left| h(\mathbf{x }_i) \cap \mathbf{L }_i\right| }{\left| h(\mathbf{x }_i) \cup \mathbf{L }_i\right| } \end{aligned}$$
    (23)
  • Precision It is defined as the average proportion of the relevant labels of an instance that are predicted:

    $$\begin{aligned} \text {Precision}(h) = \frac{1}{N}\sum _{i=1}^{N}\frac{\left| h(\mathbf{x }_i) \cap \mathbf{L }_i\right| }{\left| \mathbf{L }_i\right| } \end{aligned}$$
    (24)
  • Recall It measures the average proportion of the predicted labels that are really relevant for the instance:

    $$\begin{aligned} \text {Recall}(h) = \frac{1}{N}\sum _{i=1}^{N}\frac{\left| h(\mathbf{x }_i) \cap \mathbf{L }_i\right| }{\left| h(\mathbf{x }_i)\right| } \end{aligned}$$
    (25)
  • F1-score It is the harmonic mean between Precision and Recall:

    $$\begin{aligned} F1(h) = \frac{1}{N}\sum _{i=1}^{N}\frac{ 2 \times \left| h(\mathbf{x }_i) \cap \mathbf{L }_i\right| }{\left| h(\mathbf{x }_i)\right| + \left| \mathbf{L }_i\right| } \end{aligned}$$
    (26)
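As an illustration, the six example-based measures above can be computed with a short sketch (a hypothetical helper, not tied to any MLC library; label sets are plain Python sets and `q` is the total number of labels):

```python
# Example-based measures (Eqs. 21-26): each metric is averaged over the
# N test instances. `predicted` and `relevant` are lists of label sets
# h(x_i) and L_i; `q` is the total number of labels.

def example_based_metrics(predicted, relevant, q):
    N = len(predicted)
    return {
        # Eq. 21: exact match between predicted and relevant label sets
        "subset_accuracy": sum(h == L for h, L in zip(predicted, relevant)) / N,
        # Eq. 22: symmetric difference, normalised by the number of labels
        "hamming_loss": sum(len(h ^ L) / q for h, L in zip(predicted, relevant)) / N,
        # Eq. 23: Jaccard similarity between h(x_i) and L_i
        "accuracy": sum(len(h & L) / len(h | L) for h, L in zip(predicted, relevant)) / N,
        # Eq. 24: intersection over the relevant set L_i
        "precision": sum(len(h & L) / len(L) for h, L in zip(predicted, relevant)) / N,
        # Eq. 25: intersection over the predicted set h(x_i)
        "recall": sum(len(h & L) / len(h) for h, L in zip(predicted, relevant)) / N,
        # Eq. 26: per-instance harmonic mean of the two ratios above
        "f1": sum(2 * len(h & L) / (len(h) + len(L))
                  for h, L in zip(predicted, relevant)) / N,
    }
```

For instance, with `predicted = [{1, 2}, {3}]`, `relevant = [{1, 2}, {2, 3}]` and `q = 3`, Subset Accuracy is 0.5 (only the first prediction matches exactly) and Hamming Loss is 1/6.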

Label-based measures

These measures [43] consider each label as a binary class, which takes the value 1 for an instance if that label is associated with it and 0 otherwise.

  • Macro Precision The Macro Precision is the Precision averaged across all labels:

    $$\begin{aligned} \text {Macro}\_\text {Precision} = \frac{1}{q}\sum _{j = 1}^{q}\frac{tp_j}{tp_j + fp_j} \end{aligned}$$
    (27)

    where \(tp_j\) and \(fp_j\) are the number of true positives and false positives, respectively, for label j, \(1 \le j \le q\).

  • Macro-Recall It is the Recall averaged across all labels:

    $$\begin{aligned} \text {Macro}\_\text {Recall} = \frac{1}{q}\sum _{j = 1}^{q}\frac{tp_j}{tp_j + fn_j} \end{aligned}$$
    (28)

    where \(fn_j\) is the number of false negatives for the jth label.

  • Macro F1 It is the harmonic mean between Precision and Recall, calculated for each label and averaged over all labels:

    $$\begin{aligned} \text {Macro}\_F1 = \frac{1}{q}\sum _{j = 1}^{q}\frac{2 \times r_j \times p_j}{r_j + p_j} \end{aligned}$$
    (29)

    where \(p_j\) and \(r_j\) are the Precision and Recall for the jth label, respectively.

  • Micro Precision It measures the Precision averaged over all example/label pairs

    $$\begin{aligned} \text {Micro}\_\text {Precision} = \frac{\sum _{j = 1}^{q}tp_j}{\sum _{j = 1}^{q}tp_j + \sum _{j = 1}^{q}fp_j} \end{aligned}$$
    (30)
  • Micro-Recall It is the Recall averaged over all example/label pairs

    $$\begin{aligned} \text {Micro}\_\text {Recall} = \frac{\sum _{j = 1}^{q}tp_j}{\sum _{j = 1}^{q}tp_j + \sum _{j = 1}^{q}fn_j} \end{aligned}$$
    (31)
  • Micro-F1 Micro-F1 is simply the harmonic mean between Micro Precision and Micro-Recall

    $$\begin{aligned}&\text {Micro}\_F1 \nonumber \\&= \frac{2 \times \text {Micro}\_\text {Precision} \times \text {Micro}\_\text {Recall}}{\text {Micro}\_\text {Precision} + \text {Micro}\_\text {Recall}} \end{aligned}$$
    (32)
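The macro- and micro-averaged measures can be derived directly from the per-label counts \(tp_j\), \(fp_j\) and \(fn_j\). A minimal sketch (a hypothetical helper; it assumes every denominator is nonzero):

```python
def label_based_metrics(tp, fp, fn):
    # tp, fp, fn: per-label true positive / false positive / false negative
    # counts, one entry per label j = 1..q (all denominators assumed nonzero).
    q = len(tp)
    p = [t / (t + f) for t, f in zip(tp, fp)]   # per-label Precision p_j
    r = [t / (t + f) for t, f in zip(tp, fn)]   # per-label Recall r_j
    # Macro averages (Eqs. 27-29): average the per-label measures over labels
    macro_p = sum(p) / q
    macro_r = sum(r) / q
    macro_f1 = sum(2 * pj * rj / (pj + rj) for pj, rj in zip(p, r)) / q
    # Micro averages (Eqs. 30-32): pool the counts over all labels first
    micro_p = sum(tp) / (sum(tp) + sum(fp))
    micro_r = sum(tp) / (sum(tp) + sum(fn))
    micro_f1 = 2 * micro_p * micro_r / (micro_p + micro_r)
    return {"macro_p": macro_p, "macro_r": macro_r, "macro_f1": macro_f1,
            "micro_p": micro_p, "micro_r": micro_r, "micro_f1": micro_f1}
```

Note the design difference the formulas encode: macro averaging weights every label equally, whereas micro averaging weights every example/label pair equally, so frequent labels dominate the micro measures.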

Ranking-based measures

Ranking-based metrics [20, 22, 36, 43] evaluate the real-valued function returned by the algorithm and its corresponding ranking function.

  • One-Error It evaluates how many times, on average, the top-ranked label is not relevant for an instance. It is defined as:

    $$\begin{aligned} \text {One}\_\text {Error}(f) = \frac{1}{N}\sum _{i = 1}^{N}I\left( \arg \max _{l \in \mathbf{L }}{f(\mathbf{x }_i, l)} \notin \mathbf{L }_i\right) \end{aligned}$$
    (33)

    where I is the indicator function defined above.

  • Coverage Coverage measures the number of steps, on average, that it is necessary to go down the ranking of labels to cover all the relevant labels of the instance:

    $$\begin{aligned} \text {Coverage}(f) = \frac{1}{N}\sum _{i = 1}^{N}\max _{l \in \mathbf{L }_i}{\mathrm{rank}\_f_{\mathbf{x }_i}(l)} - 1 \end{aligned}$$
    (34)
  • Ranking Loss It evaluates, on average, the proportion of label pairs that are reversely ordered for a particular instance. Formally, let \(\mathbf{Z }_i = \{(l_n,l_m) \mid f(\mathbf{x }_i, l_m) \le f(\mathbf{x }_i, l_n), l_m \in \mathbf{L }_i, l_n \in \overline{\mathbf{L }}_i \}\), where \(\overline{\mathbf{L }}_i\) is the complementary set of \(\mathbf{L }_i\). The ranking loss is defined as

    $$\begin{aligned} \text {Ranking}\_\text {Loss}(f) = \frac{1}{N}\sum _{i = 1}^{N}\frac{\left| \mathbf{Z }_i\right| }{\left| \mathbf{L }_i\right| \left| \overline{\mathbf{L }}_i\right| } \end{aligned}$$
    (35)
  • Average precision It is the average fraction of labels ranked above a relevant label that are also relevant. For each relevant label \(l \in \mathbf{L }_i\), let \(\varLambda _{il} = \left\{ l^{'} \in \mathbf{L }_i \mid \mathrm{rank}\_f_{\mathbf{x }_i}(l^{'}) \le \mathrm{rank}\_f_{\mathbf{x }_i}(l) \right\} \). Average precision is defined as:

    $$\begin{aligned} \text {Average}\_\text {Precision}(f) = \frac{1}{N}\sum _{i = 1}^{N}\frac{1}{\left| \mathbf{L }_i\right| }\sum _{l \in \mathbf{L }_i}\frac{\left| \varLambda _{il}\right| }{\mathrm{rank}\_f_{\mathbf{x }_i}(l)} \end{aligned}$$
    (36)

    For One-Error, Coverage and Ranking Loss, lower values imply better performance, whereas for Average Precision higher values are better.
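The four ranking-based measures can be computed directly from the score function f. A hypothetical sketch (scores are dicts mapping each label to \(f(\mathbf{x }_i, l)\); it assumes every instance has at least one relevant and one irrelevant label, and ties are broken by sort order):

```python
def ranking_metrics(scores, relevant):
    # scores: list of dicts {label: f(x_i, label)}; relevant: list of sets L_i.
    N = len(scores)
    one_error = coverage = rank_loss = avg_prec = 0.0
    for f, L in zip(scores, relevant):
        # rank_f(l): 1-based position of l when sorted by decreasing score
        ranked = sorted(f, key=f.get, reverse=True)
        rank = {l: i + 1 for i, l in enumerate(ranked)}
        not_L = [l for l in f if l not in L]  # complement of L_i
        # One-Error: is the top-ranked label irrelevant?
        one_error += ranked[0] not in L
        # Coverage: steps down the ranking needed to cover all of L_i
        coverage += max(rank[l] for l in L) - 1
        # Ranking Loss: relevant/irrelevant pairs that are reversely ordered
        rank_loss += (sum(f[lm] <= f[ln] for lm in L for ln in not_L)
                      / (len(L) * len(not_L)))
        # Average Precision: precision at the rank of each relevant label
        avg_prec += (sum(sum(rank[lp] <= rank[l] for lp in L) / rank[l]
                         for l in L) / len(L))
    return {"one_error": one_error / N, "coverage": coverage / N,
            "ranking_loss": rank_loss / N, "avg_precision": avg_prec / N}
```

For example, with a single instance scored `{"a": 0.9, "b": 0.5, "c": 0.2}` and relevant set `{"a", "c"}`, One-Error is 0 (the top label "a" is relevant), Coverage is 2 and Ranking Loss is 0.5 (the pair ("b", "c") is reversely ordered).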

Appendix B: Complete results from the experimental study

The complete results from the experimentation are shown in this section. Specifically, we present the results for each dataset used in the experiments for each metric. The results of the example-based classification metrics are presented in Tables 5, 6, 7, 8, 9 and 10. The results of the label-based classification metrics are presented in Tables 11, 12, 13, 14, 15 and 16. The results of the ranking-based metrics are presented in Tables 17, 18, 19 and 20. In those tables, the best result for each dataset, algorithm and noise level is marked in bold.

Table 5 Subset Accuracy results of each method obtained for each dataset with \(0\%\), \(5\%\) and \(10\%\) of noise
Table 6 Hamming Loss results of each method obtained for each dataset with \(0\%\), \(5\%\) and \(10\%\) of noise
Table 7 Accuracy results of each method obtained for each dataset with \(0\%\), \(5\%\) and \(10\%\) of noise
Table 8 Precision results of each method obtained for each dataset with \(0\%\), \(5\%\) and \(10\%\) of noise
Table 9 Recall results of each method obtained for each dataset with \(0\%\), \(5\%\) and \(10\%\) of noise
Table 10 F1 results of each method obtained for each dataset with \(0\%\), \(5\%\) and \(10\%\) of noise
Table 11 Micro Precision results of each method obtained for each dataset with \(0\%\), \(5\%\) and \(10\%\) of noise
Table 12 Macro Precision results of each method obtained for each dataset with \(0\%\), \(5\%\) and \(10\%\) of noise
Table 13 Micro-Recall results of each method obtained for each dataset with \(0\%\), \(5\%\) and \(10\%\) of noise
Table 14 Macro-Recall results of each method obtained for each dataset with \(0\%\), \(5\%\) and \(10\%\) of noise
Table 15 Micro F1 results of each method obtained for each dataset with \(0\%\), \(5\%\) and \(10\%\) of noise
Table 16 Macro F1 results of each method obtained for each dataset with \(0\%\), \(5\%\) and \(10\%\) of noise
Table 17 Ranking Loss results of each method obtained for each dataset with \(0\%\), \(5\%\) and \(10\%\) of noise
Table 18 One-Error results of each method obtained for each dataset with \(0\%\), \(5\%\) and \(10\%\) of noise
Table 19 Coverage results of each method obtained for each dataset with \(0\%\), \(5\%\) and \(10\%\) of noise
Table 20 Average Precision results of each method obtained for each dataset with \(0\%\), \(5\%\) and \(10\%\) of noise

About this article

Cite this article

Moral-García, S., Mantas, C.J., Castellano, J.G. et al. Ensemble of classifier chains and Credal C4.5 for solving multi-label classification. Prog Artif Intell 8, 195–213 (2019). https://doi.org/10.1007/s13748-018-00171-x
