Abstract
Training of artificial neural networks in a distributed environment is considered and applied to a typical example in High Energy Physics interactive analysis. Promising results obtained on a local cluster with 64 nodes, showing a reduction of the wait time from 5 hours to 5 minutes, are described. Preliminary tests in a wide area network studying the impact of latency are also presented, and the future work on integration in a GRID framework, to be carried out within the CrossGrid European Project, is outlined.
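The abstract does not detail how the training is parallelized, so the sketch below is only a minimal, hypothetical illustration of a data-parallel scheme over MPI that such a setup could use: each node holds a subset of the simulated events, computes the error gradient on that subset, and the partial gradients are summed with MPI_Allreduce so that every node applies the identical weight update. For brevity it trains a single sigmoid neuron by plain gradient descent on synthetic data; the constants NIN, NLOCAL, EPOCHS, and ETA are placeholders, not values from the paper.

```c
/*
 * Minimal, hypothetical sketch of data-parallel neural-network training with MPI.
 * Each process owns NLOCAL synthetic events; partial gradients are summed with
 * MPI_Allreduce so all processes apply the same weight update each epoch.
 */
#include <mpi.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define NIN     2      /* number of input variables per event (placeholder) */
#define NLOCAL  10000  /* events held by each node (placeholder)            */
#define EPOCHS  100
#define ETA     0.1    /* learning rate (placeholder)                       */

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Synthetic local data set: class 1 events centred at +1, class 0 at -1. */
    static double x[NLOCAL][NIN];
    static double t[NLOCAL];
    srand(1234 + rank);
    for (int i = 0; i < NLOCAL; i++) {
        t[i] = (double)(i % 2);
        for (int j = 0; j < NIN; j++)
            x[i][j] = (t[i] > 0.5 ? 1.0 : -1.0) + ((double)rand() / RAND_MAX - 0.5);
    }

    double w[NIN + 1] = {0.0};        /* weights + bias, identical on all nodes */

    for (int epoch = 0; epoch < EPOCHS; epoch++) {
        double grad[NIN + 1] = {0.0}; /* local gradient of the quadratic error  */
        double err = 0.0;

        for (int i = 0; i < NLOCAL; i++) {
            double a = w[NIN];
            for (int j = 0; j < NIN; j++) a += w[j] * x[i][j];
            double y = sigmoid(a);
            double d = (y - t[i]) * y * (1.0 - y);
            for (int j = 0; j < NIN; j++) grad[j] += d * x[i][j];
            grad[NIN] += d;
            err += 0.5 * (y - t[i]) * (y - t[i]);
        }

        /* Sum partial gradients and errors over all nodes. */
        double gloc[NIN + 2], gsum[NIN + 2];
        for (int j = 0; j <= NIN; j++) gloc[j] = grad[j];
        gloc[NIN + 1] = err;
        MPI_Allreduce(gloc, gsum, NIN + 2, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        /* Every node applies the same update, so the weights stay synchronised. */
        long ntot = (long)NLOCAL * size;
        for (int j = 0; j <= NIN; j++) w[j] -= ETA * gsum[j] / ntot;

        if (rank == 0 && epoch % 10 == 0)
            printf("epoch %3d  mean error %.6f\n", epoch, gsum[NIN + 1] / ntot);
    }

    MPI_Finalize();
    return 0;
}
```

Compiled and launched with, for example, mpicc -O2 dist_train.c -o dist_train -lm and mpirun -np 64 ./dist_train, the cost of the per-epoch collective sum is where wide-area latency would show up; this is the kind of effect the preliminary WAN tests mentioned above study.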
Copyright information
© 2002 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Ponce, O. et al. (2002). Training of Neural Networks: Interactive Possibilities in a Distributed Framework. In: Kranzlmüller, D., Volkert, J., Kacsuk, P., Dongarra, J. (eds) Recent Advances in Parallel Virtual Machine and Message Passing Interface. EuroPVM/MPI 2002. Lecture Notes in Computer Science, vol 2474. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45825-5_15
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-44296-7
Online ISBN: 978-3-540-45825-8