Invited paper
The DARPA image understanding benchmark for parallel computers

https://doi.org/10.1016/0743-7315(91)90067-J

Abstract

This paper describes a new effort to evaluate parallel architectures applied to knowledge-based machine vision. Previous vision benchmarks have considered only execution times for isolated vision-related tasks, or a very simple image processing scenario. However, the performance of an image interpretation system depends upon a wide range of operations on different levels of representation, from processing arrays of pixels, through manipulation of extracted image events, to symbolic processing of stored models. Vision is also characterized by both bottom-up (image-based) and top-down (model-directed) processing. Thus, the costs of interactions between tasks, input and output, and system overhead must be taken into consideration. This new benchmark therefore addresses system performance on an integrated set of tasks. The Integrated Image Understanding Benchmark consists of a model-based object recognition problem, given two sources of sensory input (intensity and range data) and a database of candidate models. The models consist of configurations of rectangular surfaces, floating in space, viewed under orthographic projection, in the presence of both noise and spurious nonmodel surfaces. A partially ordered sequence of operations that solves the problem is specified, along with a recommended algorithmic method for each step. In addition to the total time and the final solution, timings are reported for each component operation, and intermediate results are output as a check on accuracy. Other factors, such as programming time, language, code size, and machine configuration, are also reported. As a result, the benchmark can be used to gain insight into processor strengths and weaknesses, and may thus help to guide the development of the next generation of parallel vision architectures. In addition to discussing the development and specification of the new benchmark, this paper presents results from running it on the Connection Machine, Warp, Image Understanding Architecture, Associative String Processor, Alliant FX/80, and Sequent Symmetry. The results are discussed and compared through a measure of relative effort, which factors out the effects of differing technologies.
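The relative effort measure lends itself to a short worked sketch. The sketch below is illustrative only: the record layout (BenchmarkReport), the use of cycle time as the technology proxy, and the normalizing formula in relative_effort are assumptions for exposition, not the paper's exact definitions. The idea is that each machine's measured time is rescaled to a common reference technology, so that a design built from slower parts is not penalized for its raw device speed.

    from dataclasses import dataclass, field

    @dataclass
    class BenchmarkReport:
        """Per-machine benchmark results (hypothetical layout for illustration)."""
        machine: str
        cycle_time_ns: float  # technology proxy used as the normalizer (assumed)
        component_times_s: dict = field(default_factory=dict)  # step -> seconds

        @property
        def total_time_s(self) -> float:
            # Total time is the sum of the per-component timings the
            # benchmark asks each implementation to report.
            return sum(self.component_times_s.values())

    def relative_effort(report: BenchmarkReport,
                        reference_cycle_ns: float = 100.0) -> float:
        """Rescale measured time to a common reference technology (assumed formula).

        A machine built from parts twice as fast as the reference has its
        time doubled, exposing architectural efficiency rather than speed.
        """
        return report.total_time_s * (reference_cycle_ns / report.cycle_time_ns)

    # Two hypothetical machines: B is built from slower parts but wastes less work.
    a = BenchmarkReport("fast-parts", 50.0,
                        {"connected components": 0.8, "surface matching": 1.2})
    b = BenchmarkReport("slow-parts", 200.0,
                        {"connected components": 2.0, "surface matching": 3.0})
    for r in (a, b):
        print(f"{r.machine}: {r.total_time_s:.1f} s measured, "
              f"relative effort {relative_effort(r):.1f}")

In this toy example the machine built from 200 ns parts reports the longer measured time (5.0 s versus 2.0 s) but the smaller relative effort (2.5 versus 4.0), which is the kind of technology-independent comparison the benchmark aims to support.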



This work was supported in part by the Defense Advanced Research Projects Agency under Contract DACA76-86-C-0015, monitored by the U.S. Army Engineer Topographic Laboratories.
