MSA: Multiphase Specifically Shared Arrays

  • Conference paper
Languages and Compilers for High Performance Computing (LCPC 2004)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3602)

Abstract

Shared address space (SAS) parallel programming models have faced difficulty scaling to large numbers of processors. Further, although in some cases SAS programs are easier to develop, in other cases they are harder to get right because of the large number of potential race conditions. We contend that a multi-paradigm programming model, comprising a distributed-memory model together with a disciplined form of shared-memory programming, may constitute a "complete" and powerful parallel programming system. Coherence mechanisms optimized for the specific access pattern of a shared variable show significant performance benefits over general DSM coherence protocols. We present MSA, a system that supports such specifically shared arrays, which can be shared in read-only, write-many, and accumulate modes. These simple modes scale well and are general enough to capture the majority of shared-memory access patterns. MSA does not support a general read-write access mode, but a single array can be shared in read-only mode in one phase and write-many mode in another. MSA coexists with the message-passing paradigm (MPI) and the processor-virtualization-based message-driven paradigm (Charm++). We present the model, its implementation, programming examples, and preliminary performance results.




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

DeSouza, J., Kalé, L.V. (2005). MSA: Multiphase Specifically Shared Arrays. In: Eigenmann, R., Li, Z., Midkiff, S.P. (eds) Languages and Compilers for High Performance Computing. LCPC 2004. Lecture Notes in Computer Science, vol 3602. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11532378_20

  • DOI: https://doi.org/10.1007/11532378_20

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-28009-5

  • Online ISBN: 978-3-540-31813-2

  • eBook Packages: Computer Science (R0)
