Dynamic partitioning in different distributed-memory environments

  • Conference paper
Job Scheduling Strategies for Parallel Processing (JSSPP 1996)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1162)

Abstract

In this paper we present a detailed analysis of dynamic partitioning in different distributed-memory parallel environments based on experimental and analytical methods. We develop an experimental test-bed for the IBM SP2 and a network of workstations, and we apply a general analytic model of dynamic partitioning. This experimental and analytical framework is then used to explore a number of fundamental performance issues and tradeoffs concerning dynamic partitioning in different distributed-memory computing environments. Our results demonstrate and quantify how the performance benefits of dynamic partitioning are heavily dependent upon several system variables, including workload characteristics, system architecture, and system load.
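
To make the notion of dynamic partitioning concrete, the short Python sketch below shows an equi-partitioning allocator: whenever a job arrives or departs, the processors of the machine are redivided as evenly as possible among the jobs currently in the system. This sketch is not taken from the paper or its test-bed; the policy details, the equipartition helper, and the job names are illustrative assumptions only.

    # Purely illustrative sketch of dynamic (equi-)partitioning: processors are
    # re-divided among the jobs in the system on every arrival and departure.
    # Not the authors' test-bed; names and policy details are assumptions.

    def equipartition(total_procs, jobs):
        """Split total_procs as evenly as possible among the given jobs."""
        if not jobs:
            return {}
        base, extra = divmod(total_procs, len(jobs))
        # The first `extra` jobs receive one additional processor.
        return {job: base + (1 if i < extra else 0) for i, job in enumerate(jobs)}

    # Example: a 16-processor partition repartitioned as jobs arrive and depart.
    running = []
    for event, job in [("arrive", "J1"), ("arrive", "J2"),
                       ("arrive", "J3"), ("depart", "J2")]:
        if event == "arrive":
            running.append(job)
        else:
            running.remove(job)
        print(event, job, "->", equipartition(16, running))

Under this hypothetical policy, the allocation of each running job changes with the multiprogramming level, which is why workload characteristics, repartitioning overheads, and system load drive the performance tradeoffs studied in the paper.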

Editor information

Dror G. Feitelson, Larry Rudolph

Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Islam, N., Prodromidis, A., Squillante, M.S. (1996). Dynamic partitioning in different distributed-memory environments. In: Feitelson, D.G., Rudolph, L. (eds) Job Scheduling Strategies for Parallel Processing. JSSPP 1996. Lecture Notes in Computer Science, vol 1162. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0022297

  • DOI: https://doi.org/10.1007/BFb0022297

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-61864-5

  • Online ISBN: 978-3-540-70710-3

  • eBook Packages: Springer Book Archive
