Abstract
The idea behind data parallel programming is to perform global operations over large data structures, where the individual operations on singleton elements of the data structure are performed simultaneously. In the simplest case, for example, this means that a loop over an array is replaced by a constant-time aggregate operation. In order to introduce parallelism, the programmer thinks about the organisation of data structures rather than the organisation of processes. This leads directly to two of the most appealing benefits of data parallelism:
- The program can be quite explicit about parallelism, through the choice of suitable data structure operations, while at the same time it is structured like an ordinary sequential program. Thus data parallelism allows efficient usage of a parallel machine's resources, while providing a straightforward programming style that avoids many of the difficulties of task-oriented concurrent programming.
- The parallelism can be scaled up simply by increasing the data structure size, without needing to reorganise the algorithm. Typical data parallel programs can use far greater numbers of processors than typical task parallel programs.
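The abstract's central idea, replacing an element-by-element loop with a single aggregate operation, can be sketched as follows. This is an illustrative example, not code from the chapter; the `pmap` helper is a hypothetical name, and a thread pool stands in for the processor array of a real data-parallel machine.

```python
from concurrent.futures import ThreadPoolExecutor

def pmap(f, xs):
    """Data-parallel map: apply f to every element of xs "at once".

    Conceptually, each element of the data structure has its own
    processor; here a thread pool merely simulates that, so the
    point is the programming style, not actual constant time.
    """
    with ThreadPoolExecutor() as pool:
        return list(pool.map(f, xs))

# The sequential loop
#     for i in range(len(a)): b[i] = a[i] + 1
# is replaced by one aggregate operation over the whole array:
result = pmap(lambda x: x + 1, range(8))
```

Note that the program above reads like ordinary sequential code, and that scaling the parallelism requires only a larger input, with no change to the algorithm, which is exactly the two benefits listed above.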
© 1999 Springer-Verlag London
Cite this chapter
O’Donnell, J. (1999). Data Parallelism. In: Hammond, K., Michaelson, G. (eds) Research Directions in Parallel Functional Programming. Springer, London. https://doi.org/10.1007/978-1-4471-0841-2_7
Print ISBN: 978-1-85233-092-7
Online ISBN: 978-1-4471-0841-2