Abstract
The contributions of this paper are twofold. First, we outline criteria by which any model of asynchronous shared-memory parallel computation can be judged, and we evaluate previous models against these criteria. Second, we introduce a new model and show that it satisfies all of the listed requirements. We also analyze, in our model, the complexity of several fundamental parallel algorithms.