Early experiences with Olden

Conference paper in Languages and Compilers for Parallel Computing (LCPC 1993)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 768)

Abstract

In an earlier paper [RRH92], we presented a new technique for the SPMD parallelization of programs that use dynamic data structures. Our approach is based on migrating the thread of computation to the processor that owns the data, and annotating the program with futures to introduce parallelism. We have implemented this approach for the Intel iPSC/860. This paper reports on our implementation, called Olden, and presents some early performance results from a series of non-trivial benchmarks.
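
To make the execution model concrete, here is a minimal sketch of the futures idea from the abstract, written in plain C. It is not Olden's source syntax or runtime interface: a POSIX thread stands in for Olden's future and thread-migration machinery, and the tree type, future struct, and tree_sum function are hypothetical names introduced only for this illustration. The recursive call on one subtree is spawned as a future, the caller walks the other subtree itself, and the future is touched (joined) before the partial results are combined.

/* Sketch only: emulates a future with a POSIX thread; not Olden's
 * actual syntax or runtime.  Compile with: cc -pthread sketch.c */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct tree {
    int value;
    struct tree *left, *right;
} tree;

/* A "future" for the sum of one subtree. */
typedef struct {
    pthread_t tid;
    tree *node;
    long result;
} future;

static long tree_sum(tree *t);

static void *future_body(void *arg)
{
    future *f = arg;
    f->result = tree_sum(f->node);
    return NULL;
}

/* Spawn the left subtree as a future, walk the right subtree locally,
 * then touch (join) the future before combining the two sums. */
static long tree_sum(tree *t)
{
    if (t == NULL)
        return 0;

    long left = 0;
    future f = { .node = t->left };
    int spawned = (t->left != NULL);
    if (spawned)
        pthread_create(&f.tid, NULL, future_body, &f);

    long right = tree_sum(t->right);

    if (spawned) {
        pthread_join(f.tid, NULL);  /* "touch": wait for the future's value */
        left = f.result;
    }
    return t->value + left + right;
}

/* Build a complete binary tree of the given depth, each node holding 1. */
static tree *build(int depth)
{
    if (depth == 0)
        return NULL;
    tree *t = malloc(sizeof *t);
    t->value = 1;
    t->left = build(depth - 1);
    t->right = build(depth - 1);
    return t;
}

int main(void)
{
    tree *t = build(8);
    printf("sum = %ld\n", tree_sum(t));  /* 2^8 - 1 = 255 */
    return 0;
}

In Olden itself, parallelism comes from the futures annotations and the runtime migrates the computation to the processor that owns the data; the sketch above only mimics the future/touch pattern on shared memory.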

Supported, in part, by a National Science Foundation Graduate Fellowship and the Fannie and John Hertz Foundation.

This work was supported, in part, by NSF Grant ASC-9110766, by DARPA and ONR under contract N00014-91-J-4039, and by the Intel Supercomputer Systems Division.

This research was supported, in part, by FCAR, NSERC, and the McGill Faculty of Graduate Studies and Research.


References

  1. F. Allen, M. Burke, P. Charles, R. Cytron, and J. Ferrante. An overview of the PTRAN analysis system for multiprocessing. J. of Parallel and Distributed Computing, 5:617–640, 1988.

  2. J.R. Allen and K. Kennedy. Automatic translation of FORTRAN programs to vector form. ACM Transactions on Programming Languages and Systems, 9(4):491–542, October 1987.

  3. A. W. Appel and K. Li. Virtual memory primitives for user programs. In Proceedings of the Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, pages 96–107, April 1991.

  4. J.M. Anderson and M.S. Lam. Global optimizations for parallelism and locality on scalable parallel machines. In Proceedings of the SIGPLAN '93 Conference on Programming Language Design and Implementation, June 1993.

  5. S.P. Amarasinghe and M.S. Lam. Communication optimization and code generation for distributed memory machines. In Proceedings of the SIGPLAN '93 Conference on Programming Language Design and Implementation, June 1993.

  6. Josh Barnes and Piet Hut. A hierarchical O(N log N) force-calculation algorithm. Nature, 324:446–449, December 1986.

  7. G. Bilardi and A. Nicolau. Adaptive bitonic sorting: An optimal parallel algorithm for shared-memory machines. SIAM Journal on Computing, 18(2):216–228, 1989.

  8. D. Bobrow and B. Wegbreit. A model and stack implementation of multiple environments. Communications of the ACM, 16(10):591–603, 1973.

  9. D. Callahan and K. Kennedy. Compiling programs for distributed memory multiprocessors. The Journal of Supercomputing, 2(2), October 1988.

  10. C. Fraser and D. Hanson. A retargetable compiler for ANSI C. SIGPLAN Notices, 26(10):29–43, 1991.

  11. C. Fraser. A code generation interface for ANSI C. Software-Practice & Experience, 21(9):963–988, 1991.

  12. M. Gerndt. Automatic Parallelization for Distributed-Memory Multiprocessing Systems. PhD thesis, University of Bonn, 1990.

  13. L. Guibas and J. Stolfi. General subdivisions and Voronoi diagrams. ACM Transactions on Graphics, 4(2):74–123, 1985.

  14. R. Gupta. SPMD execution of programs with dynamic data structures on distributed memory machines. In Proceedings of the 1992 International Conference on Computer Languages, pages 232–241, April 1992.

  15. R. H. Halstead, Jr. Multilisp: A language for concurrent symbolic computation. ACM Transactions on Programming Languages and Systems, 7(4):501–538, October 1985.

  16. W. Horwat, A.A. Chien, and W.J. Dally. Experience with CST: Programming and implementation. In Proceedings of the SIGPLAN '89 Conference on Programming Language Design and Implementation, pages 101–108, June 1989.

  17. L. J. Hendren. Parallelizing Programs with Recursive Data Structures. PhD thesis, Cornell University, January 1990.

  18. L. J. Hendren, J. Hummel, and A. Nicolau. Abstractions for recursive pointer data structures: Improving the analysis and transformation of imperative programs. In Proceedings of the SIGPLAN '92 Conference on Programming Language Design and Implementation, June 1992.

  19. S. Hiranandani, K. Kennedy, and C. Tseng. Compiler optimizations for FORTRAN D on MIMD distributed memory machines. In Proceedings of Supercomputing 91, pages 86–100, November 1991.

  20. Guido Hogen and Rita Loogen. A new stack technique for the management of runtime structures in distributed implementations. Technical Report 3, RWTH Aachen, 1993.

  21. L. J. Hendren and A. Nicolau. Parallelizing programs with recursive data structures. IEEE Transactions on Parallel and Distributed Systems, 1(1), 1990.

  22. C. Koelbel, P. Mehrotra, and J. van Rosendale. Supporting shared data structures on distributed memory architectures. In Proceedings of the Second ACM SIGPLAN Symposium on the Principles and Practice of Parallel Programming, 1990.

  23. C. Koelbel. Compiling Programs for Nonshared Memory Machines. PhD thesis, Purdue University, West Lafayette, IN, August 1990.

  24. James R. Larus. Restructuring Symbolic Programs for Concurrent Execution on Multiprocessors. PhD thesis, University of California, Berkeley, 1989.

  25. J. R. Larus and P. N. Hilfinger. Detecting conflicts between structure accesses. In Proceedings of the SIGPLAN '88 Conference on Programming Language Design and Implementation, pages 21–34, June 1988.

  26. James R. Larus and Paul N. Hilfinger. Restructuring Lisp programs for concurrent execution. In Proceedings of the ACM/SIGPLAN PPEALS 1988 — Parallel Programming: Experience with Applications, Languages and Systems, pages 100–110, July 1988.

  27. D. T. Lee and B. J. Schachter. Two algorithms for constructing a Delaunay triangulation. International Journal of Computer and Information Sciences, 9(3):219–242, 1980.

  28. E. Mohr, D. A. Kranz, and R. H. Halstead, Jr. Lazy task creation: A technique for increasing the granularity of parallel programs. IEEE Transactions on Parallel and Distributed Systems, 2(3):264–280, July 1991.

  29. D. Padua and M. Wolfe. Advanced compiler optimizations for supercomputers. Communications of the ACM, 29(12), December 1986.

  30. A. Rogers. Compiling for Locality of Reference. PhD thesis, Cornell University, August 1990.

  31. A. Rogers and K. Pingali. Process decomposition through locality of reference. In Proceedings of the SIGPLAN '89 Conference on Programming Language Design and Implementation, June 1989.

  32. A. Rogers, J. H. Reppy, and L. J. Hendren. Supporting SPMD execution for dynamic data structures. In Conference Record of the Fifth Workshop on Languages and Compilers for Parallel Computing, August 1992. To appear as a volume of Springer's Lecture Notes in Computer Science series.

  33. M. Rosing, R. Schnabel, and R. Weaver. The DINO parallel programming language. Technical Report CU-CS-457-90, University of Colorado at Boulder, April 1990.

  34. E. S. Roberts and M. T. Vandevoorde. WorkCrews: An abstraction for controlling parallelism. Technical Report 42, DEC Systems Research Center, April 1989.

  35. M. Wolfe. Optimizing Supercompilers for Supercomputers. Pitman Publishing, London, 1989.

  36. H. Zima, H. Bast, and M. Gerndt. SUPERB: A tool for semi-automatic MIMD/SIMD parallelization. Parallel Computing, 6(1):1–18, 1988.


Editor information

Utpal Banerjee, David Gelernter, Alex Nicolau, David Padua

Copyright information

© 1994 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Carlisle, M.C., Rogers, A., Reppy, J.H., Hendren, L.J. (1994). Early experiences with Olden. In: Banerjee, U., Gelernter, D., Nicolau, A., Padua, D. (eds) Languages and Compilers for Parallel Computing. LCPC 1993. Lecture Notes in Computer Science, vol 768. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-57659-2_1

  • DOI: https://doi.org/10.1007/3-540-57659-2_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-57659-4

  • Online ISBN: 978-3-540-48308-3
