How much power is needed for a billion-thread high-throughput server?

  • Research Article
  • Published in: Frontiers of Computer Science

Abstract

With the advent of Internet services, big data and cloud computing, high-throughput computing has generated much research interest, especially on high-throughput cloud servers. However, three basic questions are still not satisfactorily answered: (1) What are the basic metrics (throughput of what, and what counts as high throughput)? (2) What factors contribute most to increasing throughput? (3) Are there fundamental constraints, and how high can throughput go? This article addresses these issues by utilizing fifty years of progress on Little’s law to reveal three fundamental relations among seven basic quantities: throughput (λ), number of active threads (L), waiting time (W), system power (P), thread energy (E), Watts per thread (ω), and threads per Joule (θ). In addition to Little’s law L = λW, we obtain P = λE and λ = Lωθ under reasonable assumptions. These equations give a first-order estimate of the performance and power consumption targets for billion-thread cloud servers.
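As a minimal illustration of how these relations combine for first-order estimation, the following Python sketch applies L = λW, P = λE, and λ = Lωθ to a purely hypothetical billion-thread scenario. The numeric inputs (1 s waiting time, 1 µJ of energy per thread) are assumptions chosen for demonstration, not figures from the paper.

    def estimate(L_threads: float, W_seconds: float, E_joules_per_thread: float) -> dict:
        """First-order estimate of throughput and power from L, W, and thread energy E."""
        lam = L_threads / W_seconds        # Little's law rearranged: lambda = L / W
        P = lam * E_joules_per_thread      # system power: P = lambda * E
        omega = P / L_threads              # Watts per thread: omega = P / L
        theta = 1.0 / E_joules_per_thread  # threads per Joule: theta = 1 / E
        # Consistency check of the third relation: lambda = L * omega * theta
        assert abs(lam - L_threads * omega * theta) < 1e-6 * lam
        return {
            "throughput_threads_per_s": lam,
            "power_W": P,
            "watts_per_thread": omega,
            "threads_per_joule": theta,
        }

    # Hypothetical inputs (assumed, for illustration only):
    # 1e9 active threads, 1 s waiting time, 1 microjoule of energy per thread.
    print(estimate(L_threads=1e9, W_seconds=1.0, E_joules_per_thread=1e-6))
    # -> roughly 1e9 threads/s of throughput at about 1 kW, under these assumptions.

Under these assumed inputs the three relations stay dimensionally consistent: Lωθ = 1e9 × 1e-6 W/thread × 1e6 threads/J = 1e9 threads/s, matching λ = L/W.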



Author information

Corresponding author

Correspondence to Zhiwei Xu.

Additional information

Zhiwei Xu is a professor and CTO of the Institute of Computing Technology of the Chinese Academy of Sciences. His research interests include network computing science and Internet operating systems. His prior industrial experience includes serving as chief engineer of Dawning Corp., a leading high-performance computer vendor in China. He currently leads “Cloud-Sea Computing Systems”, a ten-year research project of the Chinese Academy of Sciences that aims to develop billion-thread computers with elastic processors by 2020. Xu holds a PhD from the University of Southern California.


About this article

Cite this article

Xu, Z. How much power is needed for a billion-thread high-throughput server? Front. Comput. Sci. 6, 339–346 (2012). https://doi.org/10.1007/s11704-012-2071-5

