Demand-based coscheduling of parallel jobs on multiprogrammed multiprocessors

  • Conference paper
  • Job Scheduling Strategies for Parallel Processing (JSSPP 1995)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 949)

Abstract

We present demand-based coscheduling, a new approach to scheduling parallel computations on multiprogrammed multiprocessors. Rather than making the pessimistic assumption that all the processes constituting a parallel job must be scheduled simultaneously to achieve good performance, demand-based coscheduling uses information about which processes are communicating and coschedules only those; the result is more opportunities for coscheduling and fewer preemptions than in more traditional coscheduling schemes. We introduce two particular types of demand-based coscheduling. The first is dynamic coscheduling, conceived for use on message-passing architectures; we present an analytical model and a simulation showing that the algorithm can achieve good performance even under pessimistic assumptions. The second is predictive coscheduling, for which we present an algorithm that detects communication by using virtual memory system information on a bus-based shared-memory multiprocessor.
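The core idea of dynamic coscheduling can be illustrated with a minimal sketch. This is not the paper's implementation (which operates in the message-arrival path of a message-passing machine); the class and method names here are hypothetical, and the scheduler is reduced to a single ready queue per node. The point shown is only the scheduling rule: when a message arrives for a process that is not currently running, the receiver preempts the running process, so communicating processes tend to end up running at the same time without any global gang-scheduling decision.

```python
# Illustrative sketch of the dynamic-coscheduling rule (hypothetical names,
# heavily simplified): a message arrival for a descheduled process triggers
# a local preemption in its favor.

class Node:
    def __init__(self):
        self.running = None   # process currently on this node's CPU
        self.ready = []       # ready queue of runnable processes

    def schedule(self, proc):
        """Put `proc` on the CPU, demoting the current process to the ready queue."""
        if self.running is not None and self.running is not proc:
            self.ready.append(self.running)
        if proc in self.ready:
            self.ready.remove(proc)
        self.running = proc

    def on_message_arrival(self, dest):
        """Dynamic-coscheduling decision, made locally on message arrival:
        if the receiving process is not running, preempt in its favor."""
        if self.running is not dest:
            self.schedule(dest)  # sender and receiver now tend to overlap

node = Node()
node.schedule("A")            # process A is running
node.ready.append("B")        # process B is runnable but descheduled
node.on_message_arrival("B")  # a message for B arrives: B preempts A
assert node.running == "B"
assert "A" in node.ready
```

Because the decision uses only local information (the destination of an arriving message), no processor needs global knowledge of which job's processes are communicating, which is what distinguishes this from matrix-style gang scheduling.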

Support for Patrick Sobalvarro's work was provided in part under a Draper Fellowship from the Charles Stark Draper Laboratory, Inc. This work was supported in part by the Advanced Research Projects Agency under Contract N00014-91-J-1698, by grants from IBM and AT&T, and by an equipment grant from DEC.

Editor information

Dror G. Feitelson, Larry Rudolph

Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Sobalvarro, P.G., Weihl, W.E. (1995). Demand-based coscheduling of parallel jobs on multiprogrammed multiprocessors. In: Feitelson, D.G., Rudolph, L. (eds) Job Scheduling Strategies for Parallel Processing. JSSPP 1995. Lecture Notes in Computer Science, vol 949. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-60153-8_25

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-60153-1

  • Online ISBN: 978-3-540-49459-1

  • eBook Packages: Springer Book Archive
