Design of a meta-parallelizer for large scientific applications

  • Conference paper
Parallel Processing: CONPAR 94 — VAPP VI (VAPP 1994, CONPAR 1994)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 854)

Abstract

The “classical parallelizers” integrate increasingly sophisticated and costly parallelization techniques. As a result, they are limited by the size of the programs they are able to compile and are not well suited to parallelizing real scientific programs, whose parallelism detection complexity may vary greatly from one code fragment to another.

The goal of our work is the construction of a meta-parallelizer for shared memory MIMD machines, LPARAD (Large applications, PARallelizer, ADaptative), able to efficiently parallelize large scientific programs in minimal time and with minimal help from the user. The implementation of LPARAD within the scope of the PAF project [PAF90] validates our approach.
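
To make the adaptive idea concrete, here is a minimal, purely illustrative sketch (in Python, not part of LPARAD or PAF): cheap dependence tests are tried on every code fragment, and the costly analyses are reserved for the fragments where profiling suggests the payoff may justify their price. Every name and threshold in the sketch (Fragment, cheap_dependence_test, exact_dependence_test, the 5% cutoff) is a hypothetical placeholder, not something taken from the paper.

    # Purely illustrative sketch of an adaptive ("meta") parallelization policy.
    # This is NOT LPARAD's algorithm: every name and threshold below is a
    # hypothetical placeholder used only to show the idea of matching the cost
    # of the analysis to the expected payoff of each code fragment.

    from dataclasses import dataclass

    @dataclass
    class Fragment:
        name: str
        runtime_share: float  # fraction of measured execution time (profiling)
        easy: bool            # True if a cheap test can already decide it

    def cheap_dependence_test(frag: Fragment) -> str:
        # Fast, conservative test: may answer "parallel" or give up ("unknown").
        return "parallel" if frag.easy else "unknown"

    def exact_dependence_test(frag: Fragment) -> str:
        # Costly, precise analysis, applied only where it is worth the price.
        return "parallel"

    def analyze(frag: Fragment, hot_threshold: float = 0.05) -> str:
        verdict = cheap_dependence_test(frag)
        if verdict != "unknown":
            return verdict
        # Escalate only where profiling shows the fragment matters enough.
        if frag.runtime_share > hot_threshold:
            return exact_dependence_test(frag)
        return "sequential"  # not worth an expensive analysis (or ask the user)

    if __name__ == "__main__":
        for f in (Fragment("init_loop", 0.01, True),
                  Fragment("hot_kernel", 0.80, False),
                  Fragment("io_wrapup", 0.02, False)):
            print(f.name, "->", analyze(f))

The titles of references [2] and [3] below suggest that, in the actual system, such decisions are guided by performance measurement and estimation; the fixed threshold above merely stands in for that mechanism.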

This research was supported in part by Centre de Recherche en Informatique de Montréal, 1801 avenue McGill College, Bureau 800, Montréal (Québec) H3A 2N4 Canada.

References

  1. F. Allen, M. Burke, P. Charles, R. Cytron, and J. Ferrante. An overview of the PTRAN analysis system for multiprocessing. Technical Report RC 13115 (#56866), IBM, New York, September 1987.

  2. Jean-Yves Berthou. Construction d'un paralléliseur interactif de logiciels scientifiques de grande taille guidé par des mesures de performances. PhD thesis, Université P. et M. Curie, October 1993.

  3. Jean-Yves Berthou and Philippe Klein. Estimating the effective performance of program parallelization on shared memory MIMD multiprocessors. In Parallel Processing: CONPAR 92 — VAPP V, pages 701–706. Springer-Verlag, 1992.

  4. J.-F. Collard. Space-time transformation of while-loops using speculative execution. In Proceedings of the 1994 Scalable High Performance Computing Conference, Knoxville, Tennessee, May 1994. To appear. Also available as LIP Report 93-38.

  5. Alain Dumay. Traitement des indexations non linéaires en parallélisation automatique: une méthode de linéarisation contextuelle. PhD thesis, Université P. et M. Curie, December 1992.

  6. Nahid Emad. Détection de parallélisme à l'aide de paralléliseurs automatiques. Technical Report MASI 92-54, Institut Blaise Pascal, September 1992.

  7. T. Fahringer. Automatic Performance Prediction for Parallel Programs on Massively Parallel Computers. PhD thesis, Department of Computer Science, University of Vienna, November 1993.

  8. Paul Feautrier. Parametric integer programming. RAIRO Recherche Opérationnelle, 22:243–268, September 1988.

  9. T. Fahringer and C. Huber. The Weight Finder, a profiler for Fortran 77 programs: user manual. Technical report, Department of Computer Science, University of Vienna, September 1992.

  10. François Irigoin, Pierre Jouvelot, and Rémi Triolet. Overview of the PIPS project. In Paul Feautrier and François Irigoin, editors, Proceedings of the International Workshop on Compilers for Parallel Computers, Paris, pages 199–212, December 1990.

  11. Pacific Sierra Research Corporation. VAST2 Reference Manual, 1986.

  12. Manuel de référence de PAF. Groupe “Calcul Parallèle” du MASI, January 1990. Available on request from Paul Feautrier.

  13. William Pugh. Uniform techniques for loop optimization. In ACM Conference on Supercomputing, pages 341–352, January 1991.

  14. Xavier Redon. Détection des réductions. Technical Report 92.52, IBP/MASI, September 1992.

  15. Mourad Raji-Werth and Paul Feautrier. Systematic construction of programs for distributed memory systems. In Paul Feautrier and François Irigoin, editors, Proceedings of the International Workshop on Compilers for Parallel Computers, Paris, December 1990.

  16. Rémi Triolet, François Irigoin, and Paul Feautrier. Direct parallelization of call statements. In ACM SIGPLAN Symposium on Compiler Construction, 1986.

  17. D. Windheiser. Optimisation de la localité des données et du parallélisme à grain fin. PhD thesis, Université de Rennes I, 1992.

  18. M. E. Wolf and M. S. Lam. A loop transformation theory and an algorithm to maximize parallelism. IEEE Transactions on Parallel and Distributed Systems, 2(4):452–470, October 1991.

  19. M. Wolfe. Iteration space tiling for memory hierarchies. In Parallel Processing for Scientific Computing, pages 357–361. SIAM, 1987.

Editor information

Bruno Buchberger, Jens Volkert

Copyright information

© 1994 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Berthou, J.-Y. (1994). Design of a meta-parallelizer for large scientific applications. In: Buchberger, B., Volkert, J. (eds) Parallel Processing: CONPAR 94 — VAPP VI (VAPP 1994, CONPAR 1994). Lecture Notes in Computer Science, vol 854. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-58430-7_57

  • DOI: https://doi.org/10.1007/3-540-58430-7_57

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-58430-8

  • Online ISBN: 978-3-540-48789-0
