Collective Communication

Reference work entry, Encyclopedia of Parallel Computing

Synonyms

Group communication; Inter-process communication

Definition

Collective communication is communication that involves a group of processing elements (termed nodes in this entry) and effects a data transfer among all or some of them. The transfer may include the application of a reduction operator or some other transformation of the data. Collective communication functionality is often exposed through library interfaces or language constructs, and is a natural extension of the message-passing paradigm.
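
For illustration (this sketch is not part of the original entry), the following C program uses MPI, the de facto message-passing standard, to invoke two archetypal collective operations: a broadcast rooted at node 0 and a sum reduction over all nodes. The communicator, data sizes, and values are illustrative assumptions.

    /* Minimal MPI sketch: one broadcast and one reduction.
       Illustrative only; the values and communicator are assumptions. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this node's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of nodes */

        /* Broadcast: the root (rank 0) sends one integer to every node. */
        int value = (rank == 0) ? 42 : 0;
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

        /* Reduction: each node contributes its rank; the sum arrives at the root. */
        int sum = 0;
        MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("value = %d, sum of ranks = %d (expected %d)\n",
                   value, sum, size * (size - 1) / 2);

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with, e.g., mpiexec -n 4, every node receives the broadcast value and the root obtains the reduced sum.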

Discussion

Introduction

Many commonly encountered communication patterns and computational operations involving data distributed across sets of processing elements (nodes) can be represented as collective communication, in which all nodes in a (sub)set of nodes collaborate to carry out a specific data redistribution or data reduction operation. Making such operations available in parallel programming languages, interfaces, or libraries has a...
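
As a concrete instance of such a data redistribution (a hedged sketch, not taken from the entry itself), the C/MPI fragment below performs an all-gather: every node contributes one block and afterwards holds the blocks of all nodes in rank order. The block size of a single double is an illustrative assumption.

    /* All-gather sketch: each node contributes one double and receives
       the contributions of all nodes. The block size is an assumption. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local = (double)rank;                  /* this node's block */
        double *all = malloc(size * sizeof(double));  /* one slot per node */
        MPI_Allgather(&local, 1, MPI_DOUBLE,
                      all, 1, MPI_DOUBLE, MPI_COMM_WORLD);
        /* Now all[i] == (double)i on every node. */

        free(all);
        MPI_Finalize();
        return 0;
    }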


Copyright information

© 2011 Springer Science+Business Media, LLC

About this entry

Cite this entry

van de Geijn, R., Träff, J. (2011). Collective Communication. In: Padua, D. (eds) Encyclopedia of Parallel Computing. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-09766-4_28
