GloNets: Globally Connected Neural Networks

  • Conference paper
  • In: Advances in Intelligent Data Analysis XXII (IDA 2024)

Abstract

Deep learning architectures suffer from depth-related performance degradation, which limits the effective depth of neural networks. Approaches like ResNet mitigate this problem but do not eliminate it entirely. We introduce Globally Connected Neural Networks (GloNet), a novel architecture that overcomes depth-related issues and is designed to be superimposed on any model, enhancing its depth without increasing complexity or degrading performance. With GloNet, the network’s head uniformly receives information from all parts of the network, regardless of their level of abstraction. This enables GloNet to self-regulate information flow during training, reducing the influence of less effective deeper layers and allowing stable training irrespective of network depth. This paper details GloNet’s design and theoretical basis, and compares it with similar existing architectures. Experiments show GloNet’s capability to self-regulate and its resilience to depth-related learning challenges such as performance degradation. Our findings position GloNet as a viable alternative to traditional architectures like ResNets.
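To make the design concrete, the sketch below shows one way to realize a globally connected network in PyTorch, assuming a fully connected backbone and summation as the aggregation feeding the head; the class GloNetMLP and all names are illustrative, not the authors’ reference implementation.

```python
import torch
import torch.nn as nn


class GloNetMLP(nn.Module):
    """Minimal sketch of a globally connected network (illustrative,
    not the authors' reference implementation): the head receives the
    sum of every block's output rather than only the last block's."""

    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int, depth: int):
        super().__init__()
        self.embed = nn.Linear(in_dim, hidden_dim)
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
             for _ in range(depth)]
        )
        self.head = nn.Linear(hidden_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.embed(x)
        acc = torch.zeros_like(x)
        for block in self.blocks:
            x = block(x)
            acc = acc + x  # every depth level reaches the head directly
        return self.head(acc)


# Usage: depth can grow freely; the head always sees all levels.
model = GloNetMLP(in_dim=16, hidden_dim=64, out_dim=10, depth=100)
out = model(torch.randn(8, 16))  # shape: (8, 10)
```

In this sketch the summation gives every block a direct gradient path to the loss, consistent with the self-regulation behavior described in the abstract: blocks that do not improve the representation can be driven toward contributing little to the sum.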

A. Di Cecco—National PhD in AI, XXXVIII cycle, health and life sciences, UCBM.

C. Metta—EU Horizon 2020: G.A. 871042 SoBigData++, NextGenEU - PNRR-PEAI (M4C2, investment 1.3) FAIR and “SoBigData.it”.

F. Morandin and M. Parton—Funded by INdAM groups GNAMPA and GNSAGA.

A. Di Cecco, C. Metta, M. Fantozzi, F. Morandin, M. Parton—Computational resources provided by CLAI laboratory, Chieti-Pescara, Italy.

Author information

Correspondence to Maurizio Parton.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Di Cecco, A., Metta, C., Fantozzi, M., Morandin, F., Parton, M. (2024). GloNets: Globally Connected Neural Networks. In: Miliou, I., Piatkowski, N., Papapetrou, P. (eds) Advances in Intelligent Data Analysis XXII. IDA 2024. Lecture Notes in Computer Science, vol 14641. Springer, Cham. https://doi.org/10.1007/978-3-031-58547-0_5

  • DOI: https://doi.org/10.1007/978-3-031-58547-0_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-58546-3

  • Online ISBN: 978-3-031-58547-0

  • eBook Packages: Computer Science, Computer Science (R0)
