\(\upmu \mathrm{DC}^2\): unified data collection for data centers

The Journal of Supercomputing

Abstract

Modern data centers play an important role in a world built on information and communication technologies (ICTs). Much effort has been devoted to building more efficient, cleaner data centers for economic, social, and environmental benefits, an objective enabled by emerging technologies such as cloud computing and software-defined networking (SDN). However, a data center is inherently heterogeneous, consisting of servers, networking devices, cooling devices, power supply devices, etc., which poses daunting challenges for its management and control. Previous approaches typically focus on a single domain, for example, traditional cloud computing for server resource management (e.g., computing and storage resources) and SDN for network management. In a similar context of networking device heterogeneity, network function virtualization has been proposed to offer a standard abstract interface for managing all networking devices. In this research, we take on the challenge of building a suite of unified middleware to monitor and control the three intrinsic subsystems of a data center: ICT, power, and cooling. Specifically, we present \(\upmu \mathrm{DC}^2\), a unified, scalable, IP-based data collection system for data center management with high extensibility, as an initial step toward a unified platform for data center operations. Our system consists of three main parts: data-source adapters that collect information from the various subsystems in a data center, a unified message bus for data transfer, and a high-performance database for persistent storage. We have conducted performance benchmarks of the key building components, namely the messaging server and the database, confirming that our system scales to a data center with high device density and real-time management requirements. Key features, such as configuration files, dynamic module loading, and data compression, give our implementation high extensibility and performance. The effectiveness of the proposed data collection system is verified by sample applications, such as traffic flow migration for load balancing, VM migration for resource reservation, and server power management for hardware safety. This research lays a foundation for unified data center management in the future.
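
To make the three-part architecture concrete, the following is a minimal, self-contained Python sketch of the adapter → message bus → database flow, not the paper's actual implementation. An in-process queue stands in for the XMPP-based message bus and a plain list stands in for the MongoDB store named in the notes below; the class and field names (DataSourceAdapter, Collector, cpu_load, publish_once) are hypothetical.

```python
# Illustrative sketch of the adapter -> message bus -> database pipeline from the
# abstract. An in-process queue stands in for the XMPP bus and a list stands in for
# the MongoDB store; all names here are placeholders, not the paper's actual API.
import json
import queue
import time
import zlib


class DataSourceAdapter:
    """Samples one data source and publishes compressed JSON messages to the bus."""

    def __init__(self, bus: "queue.Queue[bytes]", source_id: str):
        self.bus = bus
        self.source_id = source_id

    def read_metric(self) -> float:
        # Placeholder for a real probe (SNMP, IPMI, libvirt, NVML, ...).
        return 0.42

    def publish_once(self) -> None:
        message = {
            "source": self.source_id,
            "metric": "cpu_load",
            "value": self.read_metric(),
            "timestamp": time.time(),
        }
        # JSON plus zlib mirrors the "data compression" feature named in the abstract.
        self.bus.put(zlib.compress(json.dumps(message).encode("utf-8")))


class Collector:
    """Drains the bus and persists decoded samples into the (stand-in) database."""

    def __init__(self, bus: "queue.Queue[bytes]", store: list):
        self.bus = bus
        self.store = store

    def drain(self) -> None:
        while not self.bus.empty():
            payload = self.bus.get()
            self.store.append(json.loads(zlib.decompress(payload)))


if __name__ == "__main__":
    bus: "queue.Queue[bytes]" = queue.Queue()
    store: list = []
    DataSourceAdapter(bus, source_id="server-01").publish_once()
    Collector(bus, store).drain()
    print(store)  # one decoded sample, e.g. [{'source': 'server-01', ...}]
```

In the deployed system the bus and store would be the external messaging server and database benchmarked in the paper; the sketch only shows where a new data-source adapter would plug in.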

Notes

  1. OpenStack: https://www.openstack.org/.

  2. CloudStack: http://cloudstack.apache.org/.

  3. Extensible messaging and presence protocol (XMPP): http://xmpp.org/.

  4. MongoDB: http://www.mongodb.org/.

  5. Net-SNMP: http://www.net-snmp.org/.

  6. Intelligent platform management interface (IPMI: http://www.intel.com/content/www/us/en/servers/ipmi/ipmi-home.html) is a standard protocol for monitoring and controlling servers; however, IPMI implementations like iDRAC provide the monitoring status data via SNMP (see the sketch following these notes).

  7. libvirt: http://libvirt.org/.

  8. libgtop: https://developer.gnome.org/libgtop/stable/.

  9. iptables: http://www.netfilter.org/.

  10. NVIDIA Management Library (NVML): https://developer.nvidia.com/nvidia-management-library-nvml.

  11. Extensible markup language (XML): http://www.w3.org/XML/.

  12. JavaScript object notation (JSON): http://www.json.org/.

  13. ejabberd: http://www.process-one.net/en/ejabberd/.

  14. gloox: http://camaya.net/gloox/.

  15. swiften: http://swift.im/swiften/.

  16. Crypto++ Library: http://www.cryptopp.com/.

  17. JSON Compression algorithms: http://web-resource-optimization.blogspot.sg/2011/06/json-compression-algorithms.html.

  18. RRDtool: http://oss.oetiker.ch/rrdtool/.

  19. Cacti: http://www.cacti.net/.

  20. Nagios: http://www.nagios.org/.
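
As a companion to notes 5 and 6, here is a hedged sketch of how an adapter might poll a device over SNMP by shelling out to Net-SNMP's snmpget tool. The host, community string, and helper name are placeholders; the actual \(\upmu \mathrm{DC}^2\) adapters may use a different SNMP binding.

```python
# Hedged sketch: polling one SNMP OID by invoking Net-SNMP's snmpget.
# The host, community string, and OID below are placeholders.
import subprocess


def snmp_get(host: str, oid: str, community: str = "public") -> str:
    """Return the raw snmpget output for a single OID, raising on failure."""
    result = subprocess.run(
        ["snmpget", "-v2c", "-c", community, host, oid],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()


if __name__ == "__main__":
    # sysUpTime.0 is a standard MIB-2 object that most SNMP agents expose.
    print(snmp_get("192.0.2.10", "1.3.6.1.2.1.1.3.0"))
```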

Acknowledgments

Haiyong Xie is supported in part by the National Natural Science Foundation of China (NSFC) under Grant No. 61073192, by the Grand Fundamental Research Program of China (973 Program) under Grant No. 2011CB302905, by the New Century Excellent Talents Program, and by the Fundamental Research Funds for Central Universities under Grant No. WK0110000014. Yonggang Wen is supported by a Start-Up Grant from NTU, an MOE Tier-1 Grant (RG 31/11) from Singapore MOE, and an EIRP02 Grant from Singapore EMA. Yonggang Wen also acknowledges the support of the Singapore National Research Foundation under its IDM Futures Funding Initiative, administered by the Interactive & Digital Media Programme Office, Media Development Authority. Wenfeng Xia is supported by NSFC Grant No. 61073192, 973 Program Grant No. 2011CB302905, and an EIRP02 Grant from Singapore EMA.

Author information

Corresponding author

Correspondence to Wenfeng Xia.

About this article


Cite this article

Xia, W., Wen, Y., Xie, H. et al. \(\upmu \mathrm{DC}^2\): unified data collection for data centers. J Supercomput 70, 1383–1404 (2014). https://doi.org/10.1007/s11227-014-1233-7
