
Safe self-scheduling: A parallel loop scheduling scheme for shared-memory multiprocessors

Published in: International Journal of Parallel Programming

Abstract

In this paper we present Safe Self-Scheduling (SSS), a new scheme for scheduling parallel loops whose iteration execution times vary and are not known at compile time. The scheme assumes a shared memory space. SSS combines static scheduling with dynamic scheduling and draws on the advantages of each. First, it reduces dynamic scheduling overhead by statically scheduling a major portion of the loop iterations. Second, it balances the workload with a simple and efficient self-scheduling scheme that applies a new measure, the smallest critical chore size. Experimental results comparing SSS with other scheduling schemes indicate that SSS outperforms them. In the experiment on Gauss-Jordan elimination, an application well suited to static scheduling, SSS is the only self-scheduling scheme that outperforms the static scheme. This indicates that SSS achieves a balanced workload with very little overhead.
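
The abstract describes a two-phase structure: a static phase that hands out a large fraction of the iterations up front with no synchronization, and a dynamic phase that self-schedules the remainder in chunks bounded below by the smallest critical chore size. The sketch below illustrates that structure in C. It is a minimal illustration only, not the algorithm from the paper: the fraction alpha, the chunk-shrinking rule, and min_chunk (standing in for the smallest critical chore size) are assumed placeholders rather than the values the paper derives.

```c
#include <stdatomic.h>
#include <stddef.h>

typedef void (*iter_fn)(size_t);     /* body of one loop iteration */

static atomic_size_t next_iter;      /* index of the next unassigned iteration */

/* Call once, before the worker threads start: iterations [0, alpha*n)
 * are handed out statically; the rest go through the shared counter. */
void sss_init(size_t n, double alpha)
{
    atomic_store(&next_iter, (size_t)(alpha * (double)n));
}

/* Executed by each of the p worker threads (tid = 0 .. p-1).
 * min_chunk must be at least 1. */
void sss_worker(size_t tid, size_t n, size_t p,
                double alpha, size_t min_chunk, iter_fn body)
{
    /* Static phase: each thread runs a fixed block of roughly
     * alpha*n/p iterations with no synchronization at all. */
    size_t static_total = (size_t)(alpha * (double)n);
    size_t block = static_total / p;
    size_t lo = tid * block;
    size_t hi = (tid == p - 1) ? static_total : lo + block;
    for (size_t i = lo; i < hi; i++)
        body(i);

    /* Dynamic phase: the remaining iterations are self-scheduled in
     * chunks that shrink as work runs out but never drop below
     * min_chunk, which keeps the number of fetch-and-add operations
     * (and hence the scheduling overhead) small. */
    for (;;) {
        size_t claimed = atomic_load(&next_iter);
        if (claimed >= n)
            break;
        size_t chunk = (n - claimed) / (2 * p);
        if (chunk < min_chunk)
            chunk = min_chunk;
        size_t start = atomic_fetch_add(&next_iter, chunk);
        if (start >= n)
            break;
        size_t end = (start + chunk < n) ? start + chunk : n;
        for (size_t i = start; i < end; i++)
            body(i);
    }
}
```

A typical call sequence would be sss_init(n, alpha) followed by launching p threads that each run sss_worker. The paper derives the actual static fraction and the smallest critical chore size from properties of the loop and the machine; the placeholders above do not capture that analysis.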




Additional information

This research has been supported in part by the National Science Foundation under Contract No. CCR-9210568.


Cite this article

Liu, J., Saletore, V.A. & Lewis, T.G. Safe self-scheduling: A parallel loop scheduling scheme for shared-memory multiprocessors. Int J Parallel Prog 22, 589–616 (1994). https://doi.org/10.1007/BF02577870
