Experience with parallel symbolic applications in Orca

Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1068)

Included in the conference series: Parallel Symbolic Languages and Systems (PSLS 1995)

Abstract

Orca is a simple, imperative parallel programming language, based on a form of distributed shared memory called shared data-objects. This paper discusses the suitability of Orca for parallel symbolic programming. Orca was not designed specifically for this application area, and it lacks several features supported in many languages for symbolic parallel computing, such as futures, automatic load balancing, and automatic garbage collection. On the other hand, Orca does give high-level support for sharing global state. Also, its implementation automatically distributes shared data (stored in shared objects).

We first compare Orca with two other models: imperative message-passing systems and functional languages. We do so by looking at several key issues in parallel programming and studying how each of the three paradigms deals with them. Next, we describe our experiences with writing parallel symbolic applications in Orca. This work indicates that Orca is quite suitable for such applications.
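
To make the shared data-object model concrete, the following is a minimal sketch of an Orca-style shared object, loosely modeled on examples from the Orca literature; the syntax is an approximation, and the Counter object and its operation names are hypothetical illustrations, not code from this paper. An object encapsulates shared state behind a set of operations that the run-time system applies indivisibly, and a guard suspends an operation until its condition holds.

    # Illustrative Orca-style shared object (approximate syntax; hypothetical example).
    object specification Counter;
      operation incr();               # atomically increment the counter
      operation await(v: integer);    # block until the counter has reached v
    end;

    object implementation Counter;
      x: integer;                     # shared state; the run-time system replicates
                                      # or migrates it across processors

      operation incr();
      begin
        x := x + 1;                   # applied indivisibly to the shared object
      end;

      operation await(v: integer);
      begin
        guard x >= v do od;           # suspend the caller until the condition holds
      end;
    begin
      x := 0;                         # initialize the object's state
    end;

Processes created with Orca's fork statement can receive such an object as a shared parameter; since every operation is indivisible, no explicit locking is needed, which is the high-level support for sharing global state mentioned above.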

This research is supported in part by a PIONIER grant from the Netherlands Organization for Scientific Research (N.W.O.).

Editor information

Takayasu Ito, Robert H. Halstead Jr., Christian Queinnec

Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bal, H., Langendoen, K., Bhoedjang, R. (1996). Experience with parallel symbolic applications in Orca. In: Ito, T., Halstead, R.H., Queinnec, C. (eds) Parallel Symbolic Languages and Systems. PSLS 1995. Lecture Notes in Computer Science, vol 1068. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0023067

  • DOI: https://doi.org/10.1007/BFb0023067

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-61143-1

  • Online ISBN: 978-3-540-68332-2

  • eBook Packages: Springer Book Archive
