Abstract
We study two classes of unbounded fan-in parallel computation: the standard one, based on unbounded fan-in ANDs and ORs, and a new class based on unbounded fan-in threshold functions. The latter is motivated by a connectionist model of the brain used in Artificial Intelligence. We are interested in the resources of time and address complexity. Intuitively, the address complexity of a parallel machine is the number of bits needed to describe an individual piece of hardware. We demonstrate that (for WRAMs and uniform unbounded fan-in circuits) parallel time and address complexity are simultaneously equivalent to alternations and time on an alternating Turing machine (the former to within a constant multiple, and the latter to within a polynomial). In particular, for constant parallel time, the latter equivalence holds to within a constant multiple. Thus, for example, polynomial-processor, constant-time WRAMs recognize exactly the languages in the logarithmic-time hierarchy, and polynomial-word-size, constant-time WRAMs recognize exactly the languages in the polynomial-time hierarchy. As a corollary, we obtain improved simulations of deterministic Turing machines by constant-time shared-memory machines. Furthermore, in the threshold model, the same results hold if we replace the alternating Turing machine with the analogous threshold Turing machine, and the resource of alternations with the corresponding resource of thresholds. Threshold parallel computers are much more powerful than the standard models (for example, with only polynomially many processors, they can compute the parity function and sort in constant time, and multiply two integers in O(log* n) time), and appear less amenable to known lower-bound proof techniques.
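The abstract's claim that threshold circuits can compute parity in constant depth with polynomially many gates can be illustrated by the classic depth-2 construction (a sketch for intuition, not the paper's own construction; the gate and function names below are illustrative): a first layer of n threshold gates counts how many inputs are 1, and a single second-layer gate with alternating ±1 weights extracts the low-order bit of that count.

```python
# Illustrative sketch: a depth-2 threshold circuit computing PARITY
# with n + 1 unbounded fan-in gates, showing why threshold circuits
# can do in constant depth what AND/OR circuits cannot.

def threshold_gate(inputs, weights, k):
    """Unbounded fan-in threshold gate: fires iff the weighted sum >= k."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= k)

def parity_depth2(bits):
    n = len(bits)
    # Layer 1: gate t_k fires iff at least k of the input bits are 1,
    # so if s bits are 1, exactly t_1 .. t_s fire.
    layer1 = [threshold_gate(bits, [1] * n, k) for k in range(1, n + 1)]
    # Layer 2: alternating weights give t_1 - t_2 + t_3 - ... ,
    # which telescopes to 1 exactly when s is odd.
    weights = [1 if k % 2 == 0 else -1 for k in range(n)]
    return threshold_gate(layer1, weights, 1)
```

The same counting idea underlies the stronger results quoted above: once exact counts are available in one layer, functions such as sorting reduce to a constant number of further threshold layers.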
Research supported by NSF grant DCR-84-07256.
© 1986 Springer-Verlag Berlin Heidelberg
Cite this paper
Parberry, I., Schnitger, G. (1986). Parallel computation with threshold functions. In: Selman, A.L. (eds) Structure in Complexity Theory. Lecture Notes in Computer Science, vol 223. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-16486-3_105
Print ISBN: 978-3-540-16486-9
Online ISBN: 978-3-540-39825-7