
Four styles of parallel and net programming

  • Review Article
  • Published in: Frontiers of Computer Science in China

Abstract

This paper reviews the programming landscape for parallel and network computing systems, focusing on four styles of concurrent programming models and example languages and libraries. The four styles correspond to four scales of target systems. At the smallest, coprocessor scale, Single Instruction Multiple Thread (SIMT) and the Compute Unified Device Architecture (CUDA) are considered. At the multicore or single-process scale, transactional memory is discussed. At the datacenter scale, the MapReduce style is examined. At the Internet scale, Grid Service Markup Language (GSML) is reviewed, which aims to integrate resources distributed across multiple datacenters.
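As a concrete illustration of the SIMT idea at the coprocessor scale (a sketch, not code from the paper), the following Python fragment emulates a CUDA-style kernel launch sequentially: each call with a distinct thread index `tid` stands in for one hardware thread, and the `saxpy_kernel` and `launch` names are hypothetical.

```python
def saxpy_kernel(tid, a, x, y, out):
    # Each logical "thread" computes exactly one output element,
    # mirroring CUDA's one-thread-per-element idiom.
    out[tid] = a * x[tid] + y[tid]

def launch(kernel, n_threads, *args):
    # Sequential stand-in for a parallel grid launch: on real SIMT
    # hardware these n_threads iterations execute concurrently.
    for tid in range(n_threads):
        kernel(tid, *args)

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
out = [0.0] * 3
launch(saxpy_kernel, 3, 2.0, x, y, out)
# out is now [12.0, 24.0, 36.0]
```

Because each element is computed independently, the kernel body contains no synchronization; in CUDA, the same independence is what lets thousands of threads run the kernel in parallel.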

The four styles emphasize different concerns, reflecting the needs of systems at their respective scales. This paper discusses the efficiency, ease of use, and expressiveness of each.
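To make the datacenter-scale MapReduce style concrete, here is a minimal word-count sketch in Python. It is illustrative only: the `map_phase`, `shuffle`, and `reduce_phase` driver functions are assumptions standing in for the distributed runtime, not the paper's or any framework's actual API.

```python
from collections import defaultdict
from itertools import chain

def map_phase(documents, mapper):
    # Apply the user's mapper to every input record,
    # yielding a flat stream of (key, value) pairs.
    return chain.from_iterable(mapper(d) for d in documents)

def shuffle(pairs):
    # Group all values by key, as the framework's shuffle stage does.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    # Apply the user's reducer once per key.
    return {key: reducer(key, values) for key, values in groups.items()}

# User-supplied word-count logic: the only code the programmer writes.
def wc_map(doc):
    for word in doc.split():
        yield word, 1

def wc_reduce(word, counts):
    return sum(counts)

docs = ["a b a", "b c"]
result = reduce_phase(shuffle(map_phase(docs, wc_map)), wc_reduce)
# result == {"a": 2, "b": 2, "c": 1}
```

The division of labor shown here is the style's key trade-off: the programmer supplies only the pure `map` and `reduce` functions, while partitioning, scheduling, and fault tolerance are left to the runtime.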



Author information

Correspondence to Zhiwei Xu or Yongqiang He.


About this article

Cite this article

Xu, Z., He, Y., Lin, W. et al. Four styles of parallel and net programming. Front. Comput. Sci. China 3, 290–301 (2009). https://doi.org/10.1007/s11704-009-0028-0

