
Scheduling of a Parallel Computation-Bound Application and Sequential Applications Executing Concurrently on a Cluster – A Case Study

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3358)

Abstract

Studies have shown that most computers in a non-dedicated cluster are often idle or only lightly loaded. These underutilized computers can be employed to execute parallel applications. The aim of this study is to learn how the concurrent execution of a computation-bound parallel application and sequential applications influences their execution performance and cluster utilization. The results of the study demonstrate that the computation-bound parallel application benefits from load balancing while the sequential applications suffer only an insignificant slowdown, and that the overall utilization of the non-dedicated cluster is improved.






Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Wong, A.K.L., Goscinski, A.M. (2004). Scheduling of a Parallel Computation-Bound Application and Sequential Applications Executing Concurrently on a Cluster – A Case Study. In: Cao, J., Yang, L.T., Guo, M., Lau, F. (eds) Parallel and Distributed Processing and Applications. ISPA 2004. Lecture Notes in Computer Science, vol 3358. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-30566-8_76


  • DOI: https://doi.org/10.1007/978-3-540-30566-8_76

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-24128-7

  • Online ISBN: 978-3-540-30566-8

