
A Data and Task Parallel Image Processing Environment

  • Conference paper
  • First Online:
Recent Advances in Parallel Virtual Machine and Message Passing Interface (EuroPVM/MPI 2001)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 2131))


Abstract

The paper presents a data and task parallel environment for parallelizing low-level image processing applications on distributed memory systems. Image processing operators are parallelized by data decomposition using algorithmic skeletons. At the application level we use task decomposition, based on the Image Application Task Graph. In this way, an image processing application can be parallelized by both data and task decomposition, and thus better speed-ups can be obtained. The framework is implemented in C using the MPI/Panda library and can easily be ported to other distributed memory systems.
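As a rough illustration of the data-parallel side described above, the sketch below shows how a low-level operator might be wrapped in a simple row-distribution skeleton with C and MPI. This is only an assumed, minimal example: the skeleton structure, the point operator apply_point_op(), and the fixed 512x512 image size are illustrative choices, not the paper's actual API. The root scatters contiguous blocks of rows, each process applies the pixel-wise operator to its block, and the results are gathered back.

/* Minimal sketch of a data-parallel point-operator skeleton in C + MPI.
 * All names and sizes are illustrative, not taken from the paper.
 */
#include <mpi.h>
#include <stdlib.h>

#define ROWS 512
#define COLS 512

/* Example pixel-wise operator: simple intensity thresholding. */
static unsigned char apply_point_op(unsigned char pixel)
{
    return pixel > 128 ? 255 : 0;
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int rows_per_proc = ROWS / size;      /* assumes ROWS is divisible by size */
    unsigned char *image = NULL;
    unsigned char *local = malloc((size_t)rows_per_proc * COLS);

    if (rank == 0) {
        image = malloc((size_t)ROWS * COLS);
        for (int i = 0; i < ROWS * COLS; i++)   /* dummy input image */
            image[i] = (unsigned char)(i % 256);
    }

    /* Data decomposition: each process receives a contiguous block of rows. */
    MPI_Scatter(image, rows_per_proc * COLS, MPI_UNSIGNED_CHAR,
                local, rows_per_proc * COLS, MPI_UNSIGNED_CHAR,
                0, MPI_COMM_WORLD);

    /* Apply the operator to the local partition. */
    for (int i = 0; i < rows_per_proc * COLS; i++)
        local[i] = apply_point_op(local[i]);

    /* Gather the processed blocks back on the root. */
    MPI_Gather(local, rows_per_proc * COLS, MPI_UNSIGNED_CHAR,
               image, rows_per_proc * COLS, MPI_UNSIGNED_CHAR,
               0, MPI_COMM_WORLD);

    free(local);
    if (rank == 0)
        free(image);
    MPI_Finalize();
    return 0;
}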




Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Nicolescu, C., Jonker, P. (2001). A Data and Task Parallel Image Processing Environment. In: Cotronis, Y., Dongarra, J. (eds) Recent Advances in Parallel Virtual Machine and Message Passing Interface. EuroPVM/MPI 2001. Lecture Notes in Computer Science, vol 2131. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45417-9_53


  • DOI: https://doi.org/10.1007/3-540-45417-9_53

  • Published:

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42609-7

  • Online ISBN: 978-3-540-45417-5

  • eBook Packages: Springer Book Archive
