
Adaptive Consensus-Based Ensemble for Improved Deep Learning Inference Cost

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12893)

Abstract

Deep learning models are continuously improving the state of the art in nearly every domain, achieving ever higher levels of accuracy. To sustain this performance, however, these models have grown larger and more computationally intensive at a staggering rate. Using an ensemble of deep learning models to improve accuracy (compared to running a single model) is a well-known approach, but deploying it in real-world settings is challenging due to its exorbitant inference cost. In this paper we present a novel method for reducing the cost associated with an ensemble of models by ~50% on average while maintaining comparable accuracy. The proposed method is simple to implement and fully agnostic to the model and the problem domain. The experimental results presented demonstrate that our method can be used in a number of configurations, all of which provide a much better “performance per cost” than standard ensembles, whether using an ensemble of N instances of the same model architecture (each trained from scratch) or an ensemble of completely different models.
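To make the idea concrete, the sketch below illustrates one way a consensus-based ensemble can reduce inference cost (a minimal illustration of the general early-exit principle, not necessarily the authors' exact algorithm): ensemble members are evaluated sequentially, and prediction stops as soon as a fixed number of them agree, so easy inputs are charged only a fraction of the full ensemble's forward passes. The function name, the `consensus` threshold, and the assumption that each model is a callable returning a vector of class scores are illustrative choices, not details taken from the paper.

```python
import numpy as np

def adaptive_consensus_predict(models, x, consensus=2):
    """Illustrative sketch of an early-exit, consensus-based ensemble.

    `models` is assumed to be a list of callables mapping an input `x`
    to a vector of class scores (e.g., softmax outputs). Members are
    evaluated one at a time; as soon as `consensus` members agree on the
    same class, the remaining forward passes are skipped.
    """
    votes = {}
    for model in models:
        pred = int(np.argmax(model(x)))      # predicted class of this member
        votes[pred] = votes.get(pred, 0) + 1
        if votes[pred] >= consensus:         # early exit: consensus reached
            return pred
    # No early consensus: fall back to a plain majority vote over all members.
    return max(votes, key=votes.get)
```

Called as `adaptive_consensus_predict(models, x, consensus=3)` on a five-member ensemble, this sketch returns a label after as few as three forward passes whenever the first members agree, and only falls back to evaluating every member on harder inputs.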

Nathan Netanyahu is also affiliated with the Department of Computer Science at the College of Law and Business, Ramat-Gan 5257346, Israel.



Author information


Corresponding author

Correspondence to Nelly David.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

David, N., Netanyahu, N.S. (2021). Adaptive Consensus-Based Ensemble for Improved Deep Learning Inference Cost. In: Farkaš, I., Masulli, P., Otte, S., Wermter, S. (eds) Artificial Neural Networks and Machine Learning – ICANN 2021. ICANN 2021. Lecture Notes in Computer Science, vol 12893. Springer, Cham. https://doi.org/10.1007/978-3-030-86365-4_27


  • DOI: https://doi.org/10.1007/978-3-030-86365-4_27


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-86364-7

  • Online ISBN: 978-3-030-86365-4

  • eBook Packages: Computer Science (R0)
