Parallel symbolic computing in Cid

Conference paper
Parallel Symbolic Languages and Systems (PSLS 1995)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1068)

Abstract

We have designed and implemented a language called Cid for parallel applications with recursive linked data structures (e.g., lists, trees, graphs) and complex control structures (data-dependent control flow, recursion). Cid is unique in that, while targeting distributed memory machines, it attempts to preserve the traditional “MIMD threads plus lock-protected shared data” programming model that is standard on shared memory machines.

Cid is a small extension to C because although Lisp, functional and logic programming languages are attractive for symbolic computing, it appears nearly impossible to sway the momentum of C. Cid uses only a simple, platform-independent preprocessor for localized expansion of its few extensions; the preprocessed code is compiled using a standard C compiler, and linked with the Cid runtime system, which is written in standard C. By relying on standard C compilers, Cid exploits the continuing advances in optimizing C compilers and programming tools, and remains completely compatible with existing, even pre-compiled, C code.

Cid is designed for distributed address spaces because most truly scalable machines are likely to be multicomputers (even if individual nodes are shared-memory multiprocessors, or SMPs). However, Cid is not a message-passing language; it adheres to the traditional MIMD threads model with a shared, global, synchronized heap of objects. Cid has numerous design features to accommodate the relatively high communication costs of multicomputers, such as latency-tolerant multithreading, automatic load balancing and granularity control, automatic coherent object-caching, combined synchronization with object access, and asynchronous object pre-fetching, all of which we believe are necessary in any symbolic language implementation.

In this paper, we present an overview of Cid with examples using linked recursive data structures, explaining some of our design choices and aspects of its implementation. We present measurements of the cost of various Cid operations, and some preliminary observations about the effectiveness of Cid's automatic load balancing mechanisms. We also compare Cid with other approaches to developing parallel versions of C and C++ for distributed memory machines.




Editor information

Takayasu Ito, Robert H. Halstead Jr., Christian Queinnec


Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

Cite this paper

Nikhil, R.S. (1996). Parallel symbolic computing in Cid. In: Ito, T., Halstead, R.H., Queinnec, C. (eds) Parallel Symbolic Languages and Systems. PSLS 1995. Lecture Notes in Computer Science, vol 1068. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0023064

  • DOI: https://doi.org/10.1007/BFb0023064

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-61143-1

  • Online ISBN: 978-3-540-68332-2
