Abstract
AdaBoost uses per-sample weights to make each new weak hypothesis adapt to the classification mistakes of the existing ones. However, AdaBoost is highly sensitive to outliers, and hypotheses trained in earlier rounds cannot be trained further to cooperate with later ones. We propose a new algorithm that prepares all weak hypotheses at the start of training and trains them all in parallel, so that the weak hypotheses can cooperate with one another throughout training. We also modify the function that updates the sample weights so as to suppress the influence of outliers. We compared the performance of the new algorithm across several error-correcting output codes and weak-hypothesis types, and found that the proposed Parallel Cooperative Ensemble Learning (PCEL) improves multi-class classification accuracy on most datasets.
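Since the abstract does not reproduce the paper's exact update rule, loss, or ECOC decoding, the following is a minimal sketch of the core idea only: all weak hypotheses exist from the start and take weighted-gradient steps in parallel, and the per-sample weight update is clipped at a cap to suppress outliers. The weak hypotheses here are linear models on a binary toy problem (the paper targets multi-class tasks via error-correcting output codes), and every name and constant (K, cap, lr) is illustrative, not the authors'.

import numpy as np

rng = np.random.default_rng(0)

# Toy binary problem with labels in {-1, +1}; the paper addresses
# multi-class problems by decomposing them with error-correcting
# output codes, which this sketch omits.
n, d, K = 200, 2, 5
X = rng.normal(size=(n, d))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)

# All K weak hypotheses (here: linear models) exist from the start,
# rather than being added one per boosting round as in AdaBoost.
W = rng.normal(scale=0.1, size=(K, d))

w = np.full(n, 1.0 / n)   # per-sample weights
cap = 5.0                 # illustrative ceiling on unnormalized weights
lr = 0.1                  # step size for the parallel updates

for step in range(200):
    margins = X @ W.T                 # (n, K): score of each weak hypothesis
    F = margins.mean(axis=1)          # combined ensemble score

    # Adaptive data weighting: emphasize samples the current ensemble
    # misclassifies, but clip the unnormalized weights so a few outliers
    # cannot dominate training (a stand-in for the paper's modified
    # weight-update function, which is not given in the abstract).
    w = np.minimum(np.exp(-y * F), cap)
    w /= w.sum()

    # Parallel cooperative step: every weak hypothesis takes a
    # weighted-gradient step at the same time, so the learners keep
    # adapting to one another instead of being frozen after one round.
    s = 1.0 / (1.0 + np.exp(-y[:, None] * margins))   # per-learner fit
    W += lr * (((w * y)[:, None] * (1.0 - s)).T @ X)

pred = np.sign((X @ W.T).mean(axis=1))
print("training accuracy:", (pred == y).mean())

Clipping the weights before normalization is one simple way to keep outlier suppression independent of the loss; the paper itself compares such design choices across several ECOC code matrices and weak-hypothesis types.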