WALOR: Workload-Driven Adaptive Layout Optimization of Raft Groups for Heterogeneous Distributed Key-Value Stores

Conference paper

Network and Parallel Computing (NPC 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13615)

Abstract

In a heterogeneous cluster based on the Raft protocol, placing a leader on a slow node degrades performance. ALOR was proposed to address this problem, but its leader distribution is not optimal. In this paper, we propose Workload-driven Adaptive Layout Optimization of Raft groups (WALOR), which adjusts ALOR's leader distribution to better match the read-write request ratio of the system's workload and thereby improve performance further. Our experiments on a real heterogeneous cluster show that, on average, WALOR improves throughput by 82.96% over the even distribution (ED) solution and by 32.42% over ALOR.
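The paper body is not included in this excerpt, so the following is only an illustrative sketch of the idea the abstract describes: skewing Raft leader placement toward faster nodes in proportion to how read-heavy the workload is. The function name, the node-speed inputs, and the linear weighting formula are all hypothetical, not WALOR's actual algorithm.

```python
def leader_distribution(node_speeds, read_ratio):
    """Assign each node a share of Raft-group leaders (hypothetical heuristic).

    Intuition: reads are served by leaders, so a read-heavy workload favors
    concentrating leaders on fast nodes; writes must be replicated to a
    majority regardless of leader placement, so a write-heavy workload
    flattens the skew back toward an even distribution.

    node_speeds -- relative node speeds (1.0 = baseline slow node)
    read_ratio  -- fraction of requests that are reads, in [0, 1]
    """
    # read_ratio = 1.0 -> shares proportional to node speed;
    # read_ratio = 0.0 -> fully even distribution.
    weights = [1.0 + read_ratio * (s - 1.0) for s in node_speeds]
    total = sum(weights)
    return [w / total for w in weights]

# Three nodes: one fast (relative speed 2.0), two slow (1.0).
# With a 50% read ratio the fast node receives the largest leader share.
shares = leader_distribution([2.0, 1.0, 1.0], read_ratio=0.5)
```

A balanced placement like ED corresponds to `read_ratio = 0.0` here; the abstract's claim is that tuning the skew to the measured read-write ratio outperforms both a fixed even layout and ALOR's fixed layout.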


References

  1. Ongaro, D., Ousterhout, J.: In search of an understandable consensus algorithm. In: 2014 USENIX Annual Technical Conference (USENIX ATC 14), pp. 305–319 (2014)

  2. Ongaro, D.: Consensus: bridging theory and practice. Ph.D. thesis, Stanford University (2014)

  3. Wang, Y., Chai, Y., Wang, X.: ALOR: adaptive layout optimization of Raft groups for heterogeneous distributed key-value stores. In: Zhang, F., Zhai, J., Snir, M., Jin, H., Kasahara, H., Valero, M. (eds.) NPC 2018. LNCS, vol. 11276, pp. 13–26. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-05677-3_2

  4. TiKV. https://github.com/tikv/tikv (2022)

  5. Cooper, B.F., et al.: Benchmarking cloud serving systems with YCSB. In: Proceedings of the 1st ACM Symposium on Cloud Computing, pp. 143–154. ACM (2010)

  6. Lamport, L.: The part-time parliament. ACM Trans. Comput. Syst. (TOCS) 16(2), 133–169 (1998)

  7. Lamport, L.: Paxos made simple. ACM SIGACT News 32(4), 18–25 (2001)

  8. Where can I get Raft? https://raft.github.io/#implementations (2022)

  9. etcd. https://github.com/etcd-io/etcd (2022)

  10. Corbett, J.C., Dean, J., Epstein, M., et al.: Spanner: Google's globally distributed database. ACM Trans. Comput. Syst. (TOCS) 31(3), 1–22 (2013)

  11. Huang, D., et al.: TiDB: a Raft-based HTAP database. Proc. VLDB Endowment 13(12), 3072–3084 (2020)

  12. Cao, W., et al.: POLARDB meets computational storage: efficiently support analytical workloads in cloud-native relational database. In: FAST (2020)

  13. Little, J.D.C.: A proof for the queuing formula: L = \(\lambda \)W. Oper. Res. 9(3), 383–387 (1961)

  14. Little, J.D.C.: OR FORUM-Little's Law as viewed on its 50th anniversary. Oper. Res. 59(3), 536–549 (2011)

  15. Liu, G., Wang, S., Bao, Y.: SEER: a time prediction model for CNNs from GPU kernel's view. In: 2021 30th International Conference on Parallel Architectures and Compilation Techniques (PACT), pp. 173–185. IEEE (2021)

  16. Volkov, V.: A microbenchmark to study GPU performance models. ACM SIGPLAN Not. 53(1), 421–422 (2018)

  17. go-ycsb. https://github.com/pingcap/go-ycsb (2022)

  18. Wang, C., et al.: APUS: fast and scalable Paxos on RDMA. In: Proceedings of the 2017 Symposium on Cloud Computing, pp. 94–107. ACM (2017)

  19. Aguilera, M.K., et al.: Microsecond consensus for microsecond applications. In: Operating Systems Design and Implementation (OSDI). USENIX (2020)

  20. Cao, W., Liu, Z., Wang, P., et al.: PolarFS: an ultra-low latency and failure resilient distributed file system for shared storage cloud database. Proc. VLDB Endowment 11(12), 1849–1862 (2018)

  21. Sakic, E., Kellerer, W.: Response time and availability study of RAFT consensus in distributed SDN control plane. IEEE Trans. Netw. Serv. Manag. 15(1), 304–318 (2017)

  22. Zhang, Y., et al.: When Raft meets SDN: how to elect a leader and reach consensus in an unruly network. In: Proceedings of the First Asia-Pacific Workshop on Networking, pp. 1–7. ACM (2017)

  23. Kim, T., et al.: Load balancing on distributed datastore in OpenDaylight SDN controller cluster. In: 2017 IEEE Conference on Network Softwarization (NetSoft), pp. 1–3. IEEE (2017)

  24. Copeland, C., Zhong, H.: Tangaroa: a Byzantine fault tolerant Raft (2016)

  25. Dadheech, P., et al.: Performance improvement of heterogeneous cluster of big data using query optimization and MapReduce. In: International Conference on Information Management and Machine Intelligence (ICIMMI 2019) (2020)

  26. Yuan, Y., et al.: A distributed in-memory key-value store system on heterogeneous CPU-GPU cluster. VLDB J. 26(5), 729–750 (2017)

  27. Kwon, Y., et al.: Strata: a cross media file system. In: Proceedings of the 26th Symposium on Operating Systems Principles (SOSP) (2017)

  28. Kakoulli, E., Herodotou, H.: OctopusFS: a distributed file system with tiered storage management. In: Proceedings of the 2017 ACM International Conference on Management of Data (SIGMOD). ACM (2017)

Acknowledgement

This work is supported by the National Key Research and Development Program of China (No. 2019YFE0198600) and the National Natural Science Foundation of China (Nos. 61972402, 61972275, and 61732014).

Author information

Correspondence to Yunpeng Chai.


Copyright information

© 2022 IFIP International Federation for Information Processing

About this paper

Cite this paper

Wang, Y., Chai, Y., Zhang, Q. (2022). WALOR: Workload-Driven Adaptive Layout Optimization of Raft Groups for Heterogeneous Distributed Key-Value Stores. In: Liu, S., Wei, X. (eds.) Network and Parallel Computing. NPC 2022. Lecture Notes in Computer Science, vol. 13615. Springer, Cham. https://doi.org/10.1007/978-3-031-21395-3_27

  • DOI: https://doi.org/10.1007/978-3-031-21395-3_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-21394-6

  • Online ISBN: 978-3-031-21395-3

  • eBook Packages: Computer Science, Computer Science (R0)
