SEParAT: scheduling support environment for parallel application task graphs

Abstract

Programs using parallel tasks can be represented by task graphs, so that scheduling algorithms can be used to find an efficient execution order of the parallel tasks. This article proposes SEParAT, a flexible, component-based, and extensible scheduling framework that supports the scheduling of parallel programs in multiple ways. The article describes the functionality and the software architecture of SEParAT. Its flexible interfaces enable cooperation with other programming tools, e.g., tools exploiting a specification of the parallel task structure of an application. The core component of SEParAT is an extensible scheduling algorithm library that provides the infrastructure to determine efficient schedules for task graphs. Both homogeneous and heterogeneous platforms can be handled. The article also includes detailed experimental results comprising an evaluation of SEParAT itself as well as of a variety of scheduling algorithms.
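
To make the task-graph setting concrete, the following is a minimal, hypothetical sketch; it is not taken from the article and does not reflect SEParAT's actual API. Parallel tasks are nodes of a DAG, each with a toy cost model mapping a processor count to a runtime, and a simple greedy layer-based heuristic (in the spirit of the layer-based algorithms the article evaluates) assigns processors per topological level. All names (Task, topological_layers, layer_schedule) and the linear-speedup cost model are illustrative assumptions.

from collections import defaultdict

class Task:
    def __init__(self, name, work, max_procs):
        self.name = name            # task identifier
        self.work = work            # sequential execution time
        self.max_procs = max_procs  # exploitable degree of parallelism

    def runtime(self, procs):
        # Toy cost model: linear speedup up to max_procs; a real framework
        # would plug in a measured or analytical cost model here.
        return self.work / min(procs, self.max_procs)

def topological_layers(tasks, edges):
    # Partition the task graph into layers of independent tasks
    # (topological levels of the DAG).
    preds = defaultdict(set)
    for u, v in edges:
        preds[v].add(u)
    remaining = set(tasks)
    layers = []
    while remaining:
        layer = [t for t in remaining if not (preds[t] & remaining)]
        layers.append(layer)
        remaining.difference_update(layer)
    return layers

def layer_schedule(tasks, edges, total_procs):
    # Greedy layer-based heuristic: split the available processors of each
    # layer evenly among its tasks; a layer finishes with its slowest task.
    plan, makespan = [], 0.0
    for layer in topological_layers(tasks, edges):
        procs = max(1, total_procs // len(layer))
        plan.append([(t.name, procs) for t in layer])
        makespan += max(t.runtime(procs) for t in layer)
    return plan, makespan

if __name__ == "__main__":
    a = Task("A", work=8.0, max_procs=4)
    b = Task("B", work=6.0, max_procs=8)
    c = Task("C", work=6.0, max_procs=2)
    d = Task("D", work=4.0, max_procs=8)
    edges = [(a, b), (a, c), (b, d), (c, d)]
    plan, makespan = layer_schedule([a, b, c, d], edges, total_procs=8)
    print(plan)      # processor allocation per layer
    print(makespan)  # 8/4 + max(6/4, 6/2) + 4/8 = 5.5

The sketch only illustrates the problem structure; an actual scheduling library such as the one described in the article covers many further strategies (e.g., allocation-and-scheduling and approximation algorithms) as well as heterogeneous cost models.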

Acknowledgements

This project was supported by Deutsche Forschungsgemeinschaft (DFG) grants RU591/9-1 and RU591/9-2.

Author information

Corresponding author

Correspondence to Jörg Dümmler.

About this article

Cite this article

Dümmler, J., Kunis, R. & Rünger, G. SEParAT: scheduling support environment for parallel application task graphs. Cluster Comput 15, 223–238 (2012). https://doi.org/10.1007/s10586-012-0211-1
