A new architecture design paradigm for parallel computing in Scheme

  • Conference paper
  • Included in: Parallel Symbolic Computing: Languages, Systems, and Applications (PSC 1992)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 748)


Abstract

This paper describes a new architecture design paradigm that radically reassigns system responsibilities among the compiler, operating system, and architecture in order to simplify the design and increase the performance of parallel computing systems. Implementation techniques for latently typed languages such as Scheme are enhanced and used to support compiler-enforced memory protection and compiler-controlled exception handling. Hardware design complexity is greatly reduced and hardware modularity is increased, not only by eliminating the need to implement exception handling in the processor state machine but also by eliminating global control altogether. In the absence of global control, techniques such as pipelining and multiple contexts, which exploit instruction-level and thread-level parallelism, can be used together, without the usual processor complexity problems, to increase the efficiency of parallel systems. Complexity is reduced and efficiency is increased at the software level as well: compiler-enforced memory protection and a single shared system-wide virtual address space improve both inter-thread communication efficiency and inter-thread protection, yielding threads that are not only light-weight but also enjoy the protection guarantees of heavy-weight threads.
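
The software half of this argument, compiler-enforced memory protection built on the tag checks a latently typed language implementation already performs, can be pictured with a small sketch. The Scheme fragment below is illustrative only: the names make-cell, cell-ref, cell-set!, and protection-error are invented here rather than taken from the paper, and the error call assumes a Chez-style (error who message irritant) signature. Every heap object carries a tag, and every access the compiler emits checks that tag before touching memory, so threads sharing a single system-wide virtual address space can reach only storage they hold genuine references to.

    ;; A minimal sketch (not the paper's mechanism) of compiler-enforced
    ;; memory protection in a latently typed language: every object is
    ;; tagged, and every compiler-emitted access is a checked access.

    (define cell-tag 'cell)                    ; tag identifying our objects

    (define (make-cell value)                  ; allocate a tagged heap object
      (vector cell-tag value))

    (define (cell? obj)                        ; the tag check the compiler inserts
      (and (vector? obj)
           (= (vector-length obj) 2)
           (eq? (vector-ref obj 0) cell-tag)))

    (define (protection-error op obj)          ; compiler-controlled "exception"
      (error op "attempt to access a non-cell object" obj))

    (define (cell-ref obj)                     ; checked read: tag test first
      (if (cell? obj)
          (vector-ref obj 1)
          (protection-error 'cell-ref obj)))

    (define (cell-set! obj value)              ; checked write: tag test first
      (if (cell? obj)
          (vector-set! obj 1 value)
          (protection-error 'cell-set! obj)))

    ;; Two threads sharing one address space can exchange a cell simply by
    ;; passing the reference, yet a fabricated "pointer" such as the raw
    ;; integer 42 can never be dereferenced.
    (define shared (make-cell 0))
    (cell-set! shared (+ (cell-ref shared) 1)) ; ok: shared now holds 1
    ;; (cell-ref 42)                           ; raises a protection error

Because such checks are compiled into the code rather than provided by a memory-management unit and a trap handler, no exception-handling machinery is needed in the processor state machine, which is how the abstract connects software-enforced protection to simpler, more modular hardware.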

Editor information

  • Robert H. Halstead, Jr.
  • Takayasu Ito

Copyright information

© 1993 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bruggeman, C., Dybvig, R.K. (1993). A new architecture design paradigm for parallel computing in Scheme. In: Halstead, R.H., Ito, T. (eds) Parallel Symbolic Computing: Languages, Systems, and Applications. PSC 1992. Lecture Notes in Computer Science, vol 748. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0018665

  • DOI: https://doi.org/10.1007/BFb0018665

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-57396-8

  • Online ISBN: 978-3-540-48133-1

  • eBook Packages: Springer Book Archive
