Parallel Computing

Volume 25, Issues 13–14, December 1999, Pages 1583-1600

Early parallelism with a loosely coupled array of processors: The ICAP experiment

https://doi.org/10.1016/S0167-8191(99)00085-X

Abstract

The hardware, system software and scientific application examples, with their relative performance, are reported for the ICAP-1, ICAP-2, ICAP-3 and ICAP-3090 experimental systems, with emphasis on motivation, strategy and accomplishments. These pioneering efforts are considered in the light of future large-scale computational applications, which will require parallel supercomputing power as well as a new outlook on computational models.

Section snippets

The LCAP environment

Initial interest in parallel computing is generally spurred by a number of factors that differ from situation to situation. For one of us (Clementi), parallel computing was first encountered at the IBM Research laboratory, San Jose, California, in the early sixties, through an assessment of the main features of an experimental parallel computer commissioned by the US government from a research centre in Menlo Park, California, and designed particularly for solving fluid dynamics computations. The…

LCAP software

As ICAP evolved, there were three significantly different configurations: (1) host to channel-attached processors, as shown in Fig. 1, (2) host to channel-attached processors interconnected with shared bulk memories and a fast bus, as shown in Fig. 2, and (3) channel-coupled shared-memory multiprocessors, as shown in Fig. 3. We developed two approaches to the way the parallel processor configurations communicate. Configuration (1) was clearly a 'master/slave' topology; therefore the system…
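
For illustration only, the following minimal sketch in C with MPI (not the original LCAP software, which, per the references below, used a FORTRAN precompiler approach on channel-attached FPS processors) shows a generic master/slave dispatch pattern of the kind implied by configuration (1): the host (master) hands independent work units to the attached processors (slaves) and collects their results. All names and constants in the sketch are hypothetical.

    /* Master/slave dispatch sketch in C with MPI; assumes at least two ranks
     * (e.g. mpirun -np 4). This only illustrates the topology; it is not the
     * LCAP code. */
    #include <mpi.h>
    #include <stdio.h>

    #define NTASKS   64   /* hypothetical number of independent work units */
    #define TAG_WORK  1
    #define TAG_STOP  2

    /* stand-in for a real computational kernel */
    static double do_work(int task) { return (double)task * (double)task; }

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {               /* master: plays the role of the host */
            int next = 0, active = 0, stop = -1;
            double result, total = 0.0;
            MPI_Status st;

            /* hand one task (or a stop message) to every slave */
            for (int s = 1; s < size; s++) {
                if (next < NTASKS) {
                    MPI_Send(&next, 1, MPI_INT, s, TAG_WORK, MPI_COMM_WORLD);
                    next++; active++;
                } else {
                    MPI_Send(&stop, 1, MPI_INT, s, TAG_STOP, MPI_COMM_WORLD);
                }
            }

            /* collect results; keep busy slaves fed until the work runs out */
            while (active > 0) {
                MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &st);
                total += result;
                active--;
                if (next < NTASKS) {
                    MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK, MPI_COMM_WORLD);
                    next++; active++;
                } else {
                    MPI_Send(&stop, 1, MPI_INT, st.MPI_SOURCE, TAG_STOP, MPI_COMM_WORLD);
                }
            }
            printf("sum over %d tasks = %g\n", NTASKS, total);
        } else {                       /* slave: plays the role of an attached processor */
            int task;
            MPI_Status st;
            for (;;) {
                MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
                if (st.MPI_TAG == TAG_STOP) break;
                double r = do_work(task);
                MPI_Send(&r, 1, MPI_DOUBLE, 0, TAG_WORK, MPI_COMM_WORLD);
            }
        }

        MPI_Finalize();
        return 0;
    }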

LCAP applications

The first computational applications on ICAP were in computational chemistry, one of the main interests of the Kingston department and a most reasonable application choice for the early ICAP, which at the time lacked shared memory and a fast bus. There were two distinct tasks to be accomplished: first, to port a code from scalar to parallel, also measuring performance, and second, to use the parallel code in a scientific application. Note that the two tasks were often reported in the same…
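
As a side note on the performance-measurement task, parallel results of this kind are conventionally summarized as speedup and efficiency; the short sketch below (with purely hypothetical timings, not figures from the paper) shows the arithmetic.

    /* Speedup S = T1/Tp and efficiency E = S/p from measured wall-clock times.
     * The timing values below are placeholders, not measurements from the paper. */
    #include <stdio.h>

    int main(void) {
        double t1 = 3600.0;   /* hypothetical wall-clock time on 1 processor (s) */
        double tp = 420.0;    /* hypothetical wall-clock time on p processors (s) */
        int    p  = 10;       /* number of attached processors used */

        double speedup    = t1 / tp;
        double efficiency = speedup / p;

        printf("speedup    = %.2f\n", speedup);
        printf("efficiency = %.1f%%\n", 100.0 * efficiency);
        return 0;
    }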

Conclusions

There are strong indications that notable progress in simulations of 'complex matter' can be made mainly by exploiting the 'mixed systems' methodology more and more. The present lack of interdisciplinary knowledge is the main cultural drawback we face. In an attempt to break this mental bondage, we have collected different modern methods and techniques into a unified computational frame, explained in the MOTECC-91 [73] and METECC-94 [74] volumes and…

References (75)

  • Y. Wallach, Alternating Sequential/Parallel Processing, Lecture Notes in Computer Science, vol. 124, Springer, Berlin,...
  • E. Clementi, R.H. Sarma (Eds.), Structure and Dynamics: Nucleic Acids and Proteins, Adenine Press, New York,...
  • FPS-164 Operating System Manual, 1–3 (Publication No. 860-7491-000B), Floating Point Systems, January...
  • E. Clementi, Supercomputers for chemical research and development, in: J. Brandt, I.K. Ugi (Eds.), Computer...
  • Shared Bulk Memory System Software Manual, ver. 2.0, Scientific Computing Associates, Yale University, New Haven,...
  • FPSBUS software manual, Rel. G, Publication No. 860-7313-004A, Floating Point Systems Inc., Beaverton, Oregon,...
  • R. Gomperts, E. Clementi, Full ab-initio modelling of the interaction of ions with the gramicidin transmembrane...
  • E. Clementi et al., Parallel solution of fundamental algorithms using a loosely coupled array of processors
  • E. Clementi, S. Chin, G. Corongiu, J. Detrich, M. Dupuis, L.J. Evans, D. Folsom, D. Frye, G.C. Lie, D. Logan, D. Meck,...
  • J.P. Prost, S. Chin, Parallel processing on the loosely coupled IBM workstations, IBM Research Report KGN-226,...
  • E. Clementi et al., Large scale computations on the loosely coupled array of processors, Israel J. Chem. (1986)
  • E. Clementi et al., Large Scale Parallel Computations on a Loosely Coupled Array of Processors (1986)
  • E. Clementi, J. Detrich, S. Chin, G. Corongiu, D. Folsom, D. Logan, R. Caltabiano, A. Gnudi, A. Carnevali, J. Helin, P....
  • E. Clementi, S. Chin, G. Corongiu, J. Detrich, M. Dupuis, D. Folsom, G.C. Lie, D. Logan, D. Meck, V. Sonnad, R....
  • E. Clementi, S. Chin, Z. Christidis, G. Corongiu, J. Detrich, M. Dupuis, D. Folsom, G.J.B. Hurst, G.C. Lie, D. Logan,...
  • E. Clementi, S. Chin, G. Corongiu, J.H. Detrich, M. Dupuis, D. Folsom, G.C. Lie, D. Logan, V. Sonnad, Supercomputing...
  • E. Clementi et al., LCAP/3090 parallel processing for large scale scientific and engineering problems, IBM Systems Journal (1988)
  • E. Clementi et al., Solution of large scale engineering problems using a loosely coupled array of processors
  • E. Clementi, D.K. Bhattacharya, S. Chin, G. Corongiu, M. Dupuis, K. Dyall, D. Folsom, S. Foresti, S. Hassanzadeh, G.C....
  • A. Carnevali, L. Domingo, E. Clementi, A precompiler for parallel programs in ICAP environment (LCAPAR): User’s notes,...
  • R. Caltabiano, A. Carnevali, J. Detrich, Directives for the use of shared bulk memories: a precompiler extension, IBM...
  • J. Prost, J. Detrich, M. Becker, Cost characterization of the precompiler communication and synchronization directives...
  • D. Folsom, M. Klonowski, N. Pitsianis, A. Trannoy, S. Veronese, LCAP/3090 user guide, IBM Research Report KGN-193,...
  • G. Corongiu et al., Large-scale scientific applications programs on an experimental parallel computer system, IBM J. Res. Develop. (1985)
  • R. Caltabiano, M. Russo, A. Carnevali, J. Detrich, D. Folsom, Parallel computation on the loosely coupled array of...
  • G. Corongiu, J. Detrich, E. Clementi, Study of communication and synchronization overhead on the loosely array of...
  • M. Russo, A. Perez-Ambite, R. Caltabiano, J. Detrich, D. Folsom, An approach to parallel scheduling for the LCAP...