Abstract
Conventional machine learning techniques often perform poorly on complex data. Addressing this issue calls for data mining frameworks coupled with robust knowledge discovery mechanisms. Ensemble learning is one such framework: it fuses data, builds models, and mines data within a single pipeline. Despite extensive work on ensemble learning, open issues remain, such as how to manage complexity, optimize the ensemble, and fine-tune its models. Natural data processing schemes owe much of their success to parallel, robust, and efficient processing. Taking a cue from such architectures, we propose a parallelized CNN tree ensemble approach. The proposed approach is compared against a baseline, namely the deep network used within the ensemble. The ResNet-50 architecture is used for initial experimentation on the ImageNet and Natural Images datasets, and the proposed approach outperforms the baseline in all experiments on ImageNet. The approach is further benchmarked against different types of CNNs on several datasets, including CIFAR-10, CIFAR-100, Fashion-MNIST, the FEI face recognition database, and MNIST digits. Since the approach is adaptable to any CNN, it outperforms the baseline CNNs as well as state-of-the-art techniques on these datasets. The CNN architectures used for benchmarking are ResNet-50, DenseNet, WRN-28-10, and NSGANetV1. The code for the paper is available at https://github.com/mueedhafiz1982/CNNTreeEnsemble.git.
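To make the tree-ensemble idea concrete, the following is a minimal, hypothetical PyTorch sketch of a two-level tree of CNN classifiers: a root ResNet-50 routes each image to a coarse class group, and a per-group leaf ResNet-50 predicts the fine class within that group. The class grouping, backbone choice, and hard-routing rule are illustrative assumptions only, not the authors' exact algorithm; see the paper and the linked repository for the actual method.

```python
# A minimal, hypothetical sketch of a two-level CNN tree ensemble for image
# classification. It is NOT the authors' exact algorithm: the class grouping,
# backbone (torchvision ResNet-50), and hard routing rule are assumptions
# made for illustration only.
import torch
import torch.nn as nn
from torchvision import models


def make_backbone(num_outputs: int) -> nn.Module:
    """Create a ResNet-50 with a classification head of the given size."""
    net = models.resnet50(weights=None)  # pretrained weights could be used instead
    net.fc = nn.Linear(net.fc.in_features, num_outputs)
    return net


class CNNTreeEnsemble(nn.Module):
    """Root CNN routes an image to a class group; a leaf CNN per group
    predicts the fine-grained class within that group."""

    def __init__(self, class_groups):
        # class_groups: list of lists of global class ids, e.g. [[0, 1, 2], [3, 4]]
        super().__init__()
        self.class_groups = class_groups
        self.root = make_backbone(len(class_groups))        # coarse router
        self.leaves = nn.ModuleList(
            [make_backbone(len(g)) for g in class_groups]   # fine-grained experts
        )

    def forward(self, x):
        group_logits = self.root(x)                # (batch, num_groups)
        group_idx = group_logits.argmax(dim=1)     # hard routing decision
        preds = torch.empty(x.size(0), dtype=torch.long, device=x.device)
        for g, leaf in enumerate(self.leaves):     # leaves could run in parallel
            mask = group_idx == g
            if mask.any():
                local = leaf(x[mask]).argmax(dim=1)               # local class id
                global_ids = torch.tensor(self.class_groups[g], device=x.device)
                preds[mask] = global_ids[local]                   # map back to global id
        return preds


if __name__ == "__main__":
    # Tiny smoke test with random data: 10 classes split into two groups.
    model = CNNTreeEnsemble([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]).eval()
    with torch.no_grad():
        print(model(torch.randn(4, 3, 224, 224)))  # 4 predicted class ids
```

In such a design, each node of the tree can be trained and evaluated independently, which is what makes the ensemble straightforward to parallelize across devices.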
Data Availability
The code for the paper is available online at: https://github.com/mueedhafiz1982/CNNTreeEnsemble.git
The data that support the findings of this study are available from the following online sources:
ImageNet: https://www.image-net.org/
Natural Images: https://www.kaggle.com/datasets/prasunroy/natural-images
CIFAR-10: http://www.cs.toronto.edu/~kriz/cifar.html
CIFAR-100: http://www.cs.toronto.edu/~kriz/cifar.html
Fashion-MNIST: https://www.kaggle.com/datasets/zalando-research/fashionmnist
FEI Face Recognition Database: https://fei.edu.br/~cet/facedatabase.html
MNIST Handwritten Digit Dataset: http://yann.lecun.com/exdb/mnist/
Funding
The work has not received any type of funding.
Ethics declarations
Conflict of Interests
The authors declare no conflict of interest.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Hafiz, A.M., Bhat, R.A. & Hassaballah, M. Image classification using convolutional neural network tree ensembles. Multimed Tools Appl 82, 6867–6884 (2023). https://doi.org/10.1007/s11042-022-13604-6