
Boosted Residual Networks

  • Conference paper

In: Engineering Applications of Neural Networks (EANN 2017)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 744)

Abstract

In this paper we present a new ensemble method, called Boosted Residual Networks, which builds an ensemble of Residual Networks by growing the member network at each round of boosting. The proposed approach combines recent developments in Residual Networks, a method for creating very deep networks by including a shortcut connection between different groups of layers, with Deep Incremental Boosting, a methodology for quickly training ensembles of networks of increasing depth through boosting. We demonstrate that the synergy of Residual Networks and Deep Incremental Boosting has better potential than simply boosting a Residual Network of fixed structure, or than the equivalent Deep Incremental Boosting without the shortcut connections.
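For intuition, the sketch below illustrates the core loop described in the abstract: an AdaBoost-style ensemble in which the member residual network is grown by one block at each round of boosting. It is our own minimal illustration in tensorflow.keras, not the authors' implementation; the helper names (build_resnet, residual_block, ensemble_predict), the architecture, and all hyperparameters are assumptions made for the example.

```python
# A minimal sketch of the Boosted Residual Networks idea, NOT the authors'
# code: an AdaBoost-style ensemble whose member residual network gains one
# extra block at each round. All names and hyperparameters are illustrative.
import numpy as np
from tensorflow.keras import layers, models


def residual_block(x, filters=16):
    # One identity-shortcut block in the style of He et al.
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.Activation("relu")(layers.Add()([x, y]))


def build_resnet(num_blocks, input_shape=(32, 32, 3), num_classes=10):
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    for _ in range(num_blocks):
        x = residual_block(x)
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model


def boosted_residual_networks(x_train, y_train, rounds=3):
    n = len(x_train)
    w = np.full(n, 1.0 / n)              # AdaBoost example weights D_t
    ensemble, alphas = [], []
    for t in range(rounds):
        # Grow the member network by one residual block per round.
        # (Deep Incremental Boosting also copies the trained weights of
        # the shared layers into the new, deeper network; omitted here.)
        model = build_resnet(num_blocks=1 + t)
        model.fit(x_train, y_train, sample_weight=w, epochs=1, verbose=0)
        pred = model.predict(x_train, verbose=0).argmax(axis=1)
        miss = pred != y_train
        err = np.clip(w[miss].sum(), 1e-10, 1.0 - 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)   # member vote weight
        w *= np.exp(np.where(miss, alpha, -alpha))
        w /= w.sum()                              # renormalise to D_{t+1}
        ensemble.append(model)
        alphas.append(alpha)
    return ensemble, alphas


def ensemble_predict(ensemble, alphas, x):
    # Weighted sum of per-member class probabilities, then argmax.
    votes = sum(a * m.predict(x, verbose=0) for a, m in zip(alphas, ensemble))
    return votes.argmax(axis=1)
```

The weight-copying step that Deep Incremental Boosting uses to transfer the previous round's training into the grown network is noted only as a comment here; per the abstract, that transfer is what lets the ensemble train quickly.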

The authors gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla Titan X Pascal GPUs used for this research.


Notes

  1. In some cases the Boosted Residual Network (BRN) is actually faster than Deep Incremental Boosting (DIB), but we believe this to be just noise due to external factors such as system load.



Author information

Correspondence to Alan Mosca.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Mosca, A., Magoulas, G.D. (2017). Boosted Residual Networks. In: Boracchi, G., Iliadis, L., Jayne, C., Likas, A. (eds) Engineering Applications of Neural Networks. EANN 2017. Communications in Computer and Information Science, vol 744. Springer, Cham. https://doi.org/10.1007/978-3-319-65172-9_12


  • DOI: https://doi.org/10.1007/978-3-319-65172-9_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-65171-2

  • Online ISBN: 978-3-319-65172-9

  • eBook Packages: Computer Science, Computer Science (R0)
