
Computational Grid vs. Parallel Computer for Coarse-Grain Parallelization of Neural Networks Training

  • Conference paper
On the Move to Meaningful Internet Systems 2005: OTM 2005 Workshops (OTM 2005)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 3762)

Abstract

This paper considers the development of a coarse-grain parallel algorithm for artificial neural network training with dynamic mapping onto the processors of a parallel computer system. Parallelization of this algorithm on a computational grid operated under the Globus middleware is compared with results obtained on the Origin 300 parallel computer. Under an efficiency/price criterion, the experiments show that the computational grid outperforms the parallel computer.
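The abstract does not reproduce the algorithm itself, so the following is only a minimal illustrative sketch of coarse-grain parallel training, assuming a simple data-partitioning scheme in which each worker trains an independent copy of a small network on its own chunk and a master averages the resulting weights. The function train_chunk, the worker count, the learning rate, and the use of a local Python process pool are illustrative assumptions; the paper targets MPI-style processes under Globus and a dynamic mapping scheme not detailed here.

```python
# Minimal sketch (assumptions labeled above, not the paper's exact algorithm):
# coarse-grain parallelism where each worker trains an independent copy of a
# tiny single-layer network on one data chunk, and the master averages the
# partial models.
import numpy as np
from multiprocessing import Pool


def train_chunk(args):
    """Train a one-layer linear network on one data chunk with plain SGD."""
    X, y, epochs, lr = args
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = xi @ w - yi
            w -= lr * err * xi          # gradient step for squared error
    return w


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 8))
    true_w = rng.normal(size=8)
    y = X @ true_w

    n_workers = 4                       # coarse grain: one data chunk per worker
    chunks = [(Xc, yc, 50, 0.01)
              for Xc, yc in zip(np.array_split(X, n_workers),
                                np.array_split(y, n_workers))]
    with Pool(n_workers) as pool:
        weights = pool.map(train_chunk, chunks)

    w_avg = np.mean(weights, axis=0)    # master combines the partial models
    print("max |w_avg - true_w| =", np.max(np.abs(w_avg - true_w)))
```

In a grid setting, the same master/worker structure would typically be expressed with MPI processes (for example via a grid-enabled MPI such as MPICH-G2) rather than a local process pool; the sketch only conveys the coarse-grain decomposition.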





Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Turchenko, V. (2005). Computational Grid vs. Parallel Computer for Coarse-Grain Parallelization of Neural Networks Training. In: Meersman, R., Tari, Z., Herrero, P. (eds) On the Move to Meaningful Internet Systems 2005: OTM 2005 Workshops. OTM 2005. Lecture Notes in Computer Science, vol 3762. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11575863_55


  • DOI: https://doi.org/10.1007/11575863_55

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-29739-0

  • Online ISBN: 978-3-540-32132-3

  • eBook Packages: Computer Science (R0)
