SLA-aware data migration in a shared hybrid storage cluster

Cluster Computing

Abstract

Data volumes have increased tremendously in recent years. Large-scale and diverse data sets raise new challenges for storage, processing, and querying, and real-time data analysis is becoming increasingly common. Multi-tiered, hybrid storage architectures, which combine solid-state drives (SSDs) with hard disk drives (HDDs), have therefore become attractive in enterprise data centers for achieving high performance and large capacity simultaneously. From the service provider's perspective, however, efficiently managing all the data hosted in a data center so as to provide high quality of service (QoS) remains a core and difficult problem. Modern enterprise data centers often offer shared storage resources to a wide variety of applications that may demand different service level agreements (SLAs). Furthermore, a single user query from a data-intensive application can easily trigger a scan of a gigantic data set and inject a burst of disk I/Os into the back-end storage system, eventually causing severe performance degradation. In this paper, we therefore present a new approach for automated data movement in multi-tiered, hybrid storage clusters, which migrates data live among different storage media, aiming to support multiple SLAs for applications with dynamic workloads at minimal cost. Detailed trace-driven simulations show that this approach significantly improves overall performance, providing higher QoS for applications and reducing the occurrence of SLA violations. Sensitivity analysis under different system environments further validates the effectiveness and robustness of the approach.
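To make the abstract's idea concrete, the following is a hypothetical sketch, not the paper's actual algorithm: extents belonging to applications whose observed latency exceeds their SLA target become candidates for promotion to the SSD tier, hottest first, until SSD capacity runs out. All names, fields, and numbers here are illustrative assumptions.

```python
# Hypothetical sketch of an SLA-aware tiering decision (illustrative only).
# Extents of SLA-violating applications are promoted to SSD, hottest first,
# subject to remaining SSD capacity.

def plan_migrations(extents, ssd_free):
    """extents: list of dicts with keys
         'id'             : extent identifier
         'app_latency_ms' : observed latency of the owning application
         'sla_ms'         : that application's SLA latency target
         'heat'           : accesses in the last monitoring window
         'size'           : extent size (same unit as ssd_free)
       Returns the list of extent ids to promote, hottest first."""
    # Only applications currently violating their SLA generate candidates.
    candidates = [e for e in extents if e['app_latency_ms'] > e['sla_ms']]
    # Promote the most frequently accessed extents first, so each unit of
    # scarce SSD capacity absorbs the most I/O traffic.
    candidates.sort(key=lambda e: e['heat'], reverse=True)
    plan, used = [], 0
    for e in candidates:
        if used + e['size'] <= ssd_free:
            plan.append(e['id'])
            used += e['size']
    return plan

extents = [
    {'id': 'A1', 'app_latency_ms': 12.0, 'sla_ms': 5.0, 'heat': 900, 'size': 4},
    {'id': 'A2', 'app_latency_ms': 12.0, 'sla_ms': 5.0, 'heat': 50,  'size': 4},
    {'id': 'B1', 'app_latency_ms': 2.0,  'sla_ms': 5.0, 'heat': 700, 'size': 4},
]
print(plan_migrations(extents, ssd_free=4))  # ['A1']
```

Note that B1 is hot but its application already meets its SLA, so it stays on HDD; the actual approach in the paper additionally accounts for migration cost and dynamic workload changes.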


Notes

  1. We remark that the setting of \(t_{win}\) depends on how frequently the workload changes: if the workload changes quickly, a small \(t_{win}\) is preferred, and vice versa.

  2. An I/O request response time is measured from the moment when an I/O request is submitted to the moment when that I/O request finishes.
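The response-time definition in Note 2 can be illustrated with a minimal sketch; the helper names and the SLA threshold below are hypothetical, chosen only for the example.

```python
# Minimal illustration of Note 2: an I/O request's response time is the
# interval from the moment it is submitted to the moment it finishes.

def response_times(requests):
    """requests: list of (submit_time, finish_time) pairs, in ms."""
    return [finish - submit for submit, finish in requests]

def sla_violation_rate(requests, sla_ms):
    """Fraction of requests whose response time exceeds the SLA target."""
    rts = response_times(requests)
    return sum(rt > sla_ms for rt in rts) / len(rts)

reqs = [(0.0, 3.0), (1.0, 9.0), (2.0, 4.0), (3.0, 15.0)]
print(response_times(reqs))           # [3.0, 8.0, 2.0, 12.0]
print(sla_violation_rate(reqs, 5.0))  # 0.5
```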


Acknowledgments

This work was partially supported by NSF Grant CNS-1251129 and IBM Faculty Award.

Author information


Corresponding author

Correspondence to Ningfang Mi.


About this article


Cite this article

Tai, J., Sheng, B., Yao, Y. et al. SLA-aware data migration in a shared hybrid storage cluster. Cluster Comput 18, 1581–1593 (2015). https://doi.org/10.1007/s10586-015-0461-9
