
Parallelization of Image Compression on Distributed Memory Architecture

  • Conference paper
  • First Online:

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1823)

Abstract

In this work we propose two parallel algorithms for image compression based on multilayer neural networks, in which the image is subdivided into blocks. The first parallel technique relies on a static distribution of the blocks over the processors. The advantage of this distribution is that the training phase (the construction of the compressor-decompressor network) requires no communication, but its drawback is load imbalance. The second parallel technique improves the load balance by distributing the blocks dynamically, at the cost of communication between processors. Both implementations are tested and compared on a distributed memory machine under PVM.
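The dynamic scheme described above is essentially a master-worker pattern: a master process keeps the queue of image blocks and hands the next block to whichever worker finishes first, so faster processors naturally receive more blocks. The sketch below is a minimal, hypothetical illustration of that pattern, not the authors' code. The paper runs under PVM; MPI is used here only because the send/receive structure is analogous, and the block count, message tags, and the train_on_block placeholder are assumptions.

    /*
     * Hypothetical sketch of the dynamic block distribution: the master owns
     * the queue of image blocks and hands the next one to whichever worker
     * reports a result, trading extra messages for better load balance.
     * The paper uses PVM; MPI stands in here for the same pattern.
     */
    #include <mpi.h>

    #define NUM_BLOCKS 256      /* assumed number of image blocks */
    #define TAG_WORK   1
    #define TAG_RESULT 2
    #define TAG_STOP   3

    /* Placeholder for training the compressor-decompressor network on one block. */
    static double train_on_block(int block_id) {
        return (double)block_id;
    }

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {                         /* master: owns the block queue */
            int next = 0, active = 0;
            MPI_Status st;
            for (int w = 1; w < size; ++w) {     /* seed every worker once */
                if (next < NUM_BLOCKS) {
                    MPI_Send(&next, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
                    ++next; ++active;
                } else {
                    MPI_Send(&next, 0, MPI_INT, w, TAG_STOP, MPI_COMM_WORLD);
                }
            }
            while (active > 0) {                 /* hand out remaining blocks on demand */
                double res;
                MPI_Recv(&res, 1, MPI_DOUBLE, MPI_ANY_SOURCE, TAG_RESULT,
                         MPI_COMM_WORLD, &st);
                if (next < NUM_BLOCKS) {
                    MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                             MPI_COMM_WORLD);
                    ++next;
                } else {
                    MPI_Send(&next, 0, MPI_INT, st.MPI_SOURCE, TAG_STOP,
                             MPI_COMM_WORLD);
                    --active;
                }
            }
        } else {                                 /* worker: receive-train-reply loop */
            int block;
            MPI_Status st;
            for (;;) {
                MPI_Recv(&block, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
                if (st.MPI_TAG == TAG_STOP) break;
                double res = train_on_block(block);
                MPI_Send(&res, 1, MPI_DOUBLE, 0, TAG_RESULT, MPI_COMM_WORLD);
            }
        }
        MPI_Finalize();
        return 0;
    }

By contrast, the static scheme of the first algorithm would assign block i to worker (i mod (size-1)) + 1 before training starts, eliminating the messages above but leaving slower processors with as many blocks as fast ones.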

Supported by the European Program INCO-DC, Project “DAPPI”.








Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

El Daoudi, M., El Jaâra, M., Cherif, N. (2000). Parallelization of Image Compression on Distributed Memory Architecture. In: Bubak, M., Afsarmanesh, H., Hertzberger, B., Williams, R. (eds) High Performance Computing and Networking. HPCN-Europe 2000. Lecture Notes in Computer Science, vol 1823. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45492-6_67


  • DOI: https://doi.org/10.1007/3-540-45492-6_67

  • Published:

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-67553-2

  • Online ISBN: 978-3-540-45492-2

  • eBook Packages: Springer Book Archive
