Abstract
We consider the energy-performance tradeoff of scheduling parallel jobs on multiprocessors using dynamic speed scaling. The objective is to minimize the sum of energy consumption and a performance metric, namely makespan or total flow time. We focus on designing algorithms that are aware of the jobs' instantaneous parallelism but not of their future characteristics. For total flow time plus energy, it is known that any algorithm that does not rely on instantaneous parallelism is Ω(ln^{1/α} P)-competitive, where P is the total number of processors and α > 1 is the exponent of the power function. In this paper, we demonstrate the benefits of knowing instantaneous parallelism by presenting an O(1)-competitive algorithm. In the case of makespan plus energy, which is considered in the literature for the first time, we present an O(ln^{1−1/α} P)-competitive algorithm for batched jobs consisting of fully-parallel and sequential phases. We show that this algorithm is asymptotically optimal by providing a matching lower bound.
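To give a concrete feel for the model, the sketch below is a minimal discrete simulator of parallelism-aware speed scaling. It is an illustrative assumption, not the paper's algorithm: it runs a job's h available processors at speed h^{1/α} during a phase of instantaneous parallelism h, so that power consumption (h processors each drawing s^α) tracks the parallelism, and it reports the resulting completion time and energy.

```python
# Illustrative sketch (hypothetical, not the paper's algorithm): a job is a
# sequence of phases (h, w), where h is the instantaneous parallelism and w
# is the amount of work in the phase. Each of the h processors runs at speed
# s = h**(1/alpha), so per-processor power is s**alpha and total power is
# h * s**alpha under the standard polynomial power model.

def simulate(phases, alpha=2.0):
    """Return (completion_time, energy) for one job executed phase by phase."""
    time, energy = 0.0, 0.0
    for h, w in phases:
        s = h ** (1.0 / alpha)       # per-processor speed for this phase
        rate = h * s                 # total work processed per unit time
        power = h * s ** alpha       # h processors, each at power s**alpha
        duration = w / rate          # time to finish the phase's work
        time += duration
        energy += power * duration
    return time, energy


# A fully-parallel phase (h = 4) followed by a sequential phase (h = 1):
t, e = simulate([(4, 8.0), (1, 2.0)], alpha=2.0)
```

With α = 2 the first phase runs 4 processors at speed 2 (rate 8, power 16) and the second runs 1 processor at speed 1, so the tradeoff between finishing faster and paying a convex power cost is visible directly in the numbers.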
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Sun, H., He, Y., Hsu, WJ. (2011). Speed Scaling for Energy and Performance with Instantaneous Parallelism. In: Marchetti-Spaccamela, A., Segal, M. (eds) Theory and Practice of Algorithms in (Computer) Systems. TAPAS 2011. Lecture Notes in Computer Science, vol 6595. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-19754-3_24
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-19753-6
Online ISBN: 978-3-642-19754-3