
Scheduling in Parallel and Distributed Computing Systems

  • Chapter
  • First Online:
Topics in Parallel and Distributed Computing

Abstract

Recent advancements in computing technology have increased the complexity of computational systems and their ability to solve larger and more complex scientific problems. Scientific applications express solutions to complex scientific problems, which are often data-parallel and contain large loops. The execution of such applications in parallel and distributed computing (PDC) environments is computationally intensive and, in general, exhibits irregular behavior due to variations of an algorithmic and systemic nature. A parallel and distributed system has a set of defined policies for the use of its computational resources, and the distribution of input data onto the PDC resources depends on these policies. To reduce overall performance degradation, mapping application tasks onto PDC resources requires parallelism detection in the application, partitioning of the problem into tasks, distribution of tasks onto parallel and distributed processing resources, and scheduling of task execution on the allocated resources. Most scheduling policies include provisions for minimizing communication among application tasks, minimizing load imbalance, and maximizing fault tolerance. Often, these techniques minimize idle time, the overloading of resources with jobs, and control overheads. Over the years, a number of scheduling techniques have been developed and exploited to address the challenges in parallel and distributed computing. In addition, these scheduling algorithms have been classified based on a taxonomy that enables an understanding and comparison of the different schemes. The techniques have broadly been classified into static and dynamic techniques. Static techniques are helpful in minimizing an individual task's response time and incur no overhead for information gathering; however, they require prior knowledge of the system and cannot address unpredictable changes during runtime. Dynamic techniques, on the other hand, have been developed to address such unpredictable changes and to maximize resource utilization, at the cost of an information-gathering overhead. Furthermore, scheduling algorithms have also been characterized as optimal or sub-optimal, cooperative or non-cooperative, and approximate or heuristic. This chapter provides content on scheduling in parallel and distributed computing, and a taxonomy of existing (early and recent) scheduling methodologies.
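The static/dynamic distinction drawn in the abstract can be illustrated with a minimal sketch (hypothetical, not taken from the chapter): a static schedule fixes the task-to-worker mapping before execution begins, while a dynamic schedule lets idle workers pull tasks from a shared queue at runtime, trading coordination overhead for better load balance.

```python
# Hypothetical sketch of the two scheduling classes described in the abstract.
# Names (static_schedule, dynamic_schedule) are illustrative, not from the text.
import queue
import threading


def static_schedule(tasks, num_workers):
    """Static: assign tasks in fixed contiguous blocks, decided before execution.
    No runtime information gathering, but no adaptation to load imbalance."""
    chunk = (len(tasks) + num_workers - 1) // num_workers
    return [tasks[i * chunk:(i + 1) * chunk] for i in range(num_workers)]


def dynamic_schedule(tasks, num_workers):
    """Dynamic (self-scheduling): workers repeatedly pull the next task from a
    shared queue, so faster workers naturally take on more work."""
    work = queue.Queue()
    for t in tasks:
        work.put(t)
    done = [[] for _ in range(num_workers)]  # tasks completed by each worker

    def worker(wid):
        while True:
            try:
                t = work.get_nowait()
            except queue.Empty:
                return  # no tasks left; worker retires
            done[wid].append(t)  # placeholder for actually executing the task

    threads = [threading.Thread(target=worker, args=(w,)) for w in range(num_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return done


print(static_schedule(list(range(10)), 3))  # → [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Under a static schedule the per-worker blocks are known in advance, which is why the abstract notes that static techniques require prior knowledge of the system; the dynamic variant needs no such knowledge, but pays for the shared queue's synchronization.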




Correspondence to Srishti Srivastava.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this chapter


Cite this chapter

Srivastava, S., Banicescu, I. (2018). Scheduling in Parallel and Distributed Computing Systems. In: Prasad, S., Gupta, A., Rosenberg, A., Sussman, A., Weems, C. (eds) Topics in Parallel and Distributed Computing. Springer, Cham. https://doi.org/10.1007/978-3-319-93109-8_11


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-93108-1

  • Online ISBN: 978-3-319-93109-8

  • eBook Packages: Computer Science, Computer Science (R0)
