
Distributed Training of Generative Adversarial Networks for Fast Detector Simulation

  • Conference paper
  • In: High Performance Computing (ISC High Performance 2018)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 11203)

Abstract

The simulation of particle interactions in High Energy Physics detectors is a compute-intensive task. Since some level of approximation is acceptable, it is possible to implement simplified fast-simulation models that are far less computationally demanding. Here we present a fast simulation based on Generative Adversarial Networks (GANs). The model consists of a generative network that describes the detector response and a discriminative network, trained in an adversarial manner. Because this adversarial training process is compute-intensive, a distributed approach becomes particularly important. We present scaling results for a data-parallel approach that distributes GAN training across multiple nodes of TACC's Stampede2. The efficiency achieved was above 94% when scaling from 1 to 128 Xeon Scalable Processor nodes. We report on the accuracy of the generated samples and on the scaling of time-to-solution, and we demonstrate how HPC installations could be used to globally optimize models of this kind, leading to quicker research and experimentation cycles thanks to their large computational power and excellent connectivity.
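The synchronous data-parallel scheme the abstract refers to can be sketched as follows: each worker computes gradients on its own data shard, the gradients are averaged across workers (an allreduce in a real MPI/Horovod setup), and every worker applies the same averaged update. The sketch below simulates the workers in a single process and uses a simple quadratic loss as a stand-in for the GAN losses; the shard count, loss, and learning rate are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def allreduce_average(grads):
    # In real distributed training this is an MPI/NCCL allreduce;
    # here we simulate it by averaging the per-worker gradients.
    return np.mean(grads, axis=0)

def data_parallel_step(w, shards, lr=0.1):
    # Each simulated worker computes the gradient of the loss
    # mean ||x - w||^2 over its own shard (gradient: 2 * (w - x)).
    grads = [np.mean(2.0 * (w - shard), axis=0) for shard in shards]
    g = allreduce_average(grads)  # synchronize: identical update on every worker
    return w - lr * g

rng = np.random.default_rng(0)
data = rng.normal(3.0, 1.0, size=(128, 4))  # full dataset
shards = np.split(data, 8)                  # 8 simulated workers, equal shards
w = np.zeros(4)
for _ in range(200):
    w = data_parallel_step(w, shards)
# With equal shard sizes, the averaged gradient equals the full-batch
# gradient, so w converges to the same solution as single-node training
# (here, the mean of the full dataset).
```

Because the shards are equal-sized, averaging per-shard gradients reproduces the full-batch gradient exactly, which is why synchronous data parallelism preserves single-node convergence behavior (up to large-batch effects).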


Notes

  1. https://www.tacc.utexas.edu/.


Author information

Corresponding author: Sofia Vallecorsa.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Vallecorsa, S., et al. (2018). Distributed Training of Generative Adversarial Networks for Fast Detector Simulation. In: Yokota, R., Weiland, M., Shalf, J., Alam, S. (eds) High Performance Computing. ISC High Performance 2018. Lecture Notes in Computer Science, vol. 11203. Springer, Cham. https://doi.org/10.1007/978-3-030-02465-9_35


  • DOI: https://doi.org/10.1007/978-3-030-02465-9_35

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-02464-2

  • Online ISBN: 978-3-030-02465-9

  • eBook Packages: Computer Science; Computer Science (R0)
