Long-Term Fairness Scheduler for Pay-as-You-Use Cache Sharing Systems

  • Conference paper
  • First Online:
Algorithms and Architectures for Parallel Processing (ICA3PP 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13777)
Abstract

Pay-as-you-go cache systems are now widely available as storage services in cloud computing, and users usually purchase long-term service to obtain higher discounts. However, users’ caching needs not only change constantly over time but are also affected by workload characteristics, making it difficult to always guarantee high efficiency of cache resource usage. Cache sharing is an effective way to improve this efficiency, but to incentivize users to share resources, long-term fairness among users must be ensured. Traditional resource allocation strategies only guarantee instantaneous fairness and are thus not suitable for pay-as-you-go cache systems. This paper proposes a long-term cache fairness allocation policy, named FairCache, with several desirable properties. First, FairCache encourages users to buy and share cache resources through group purchasing, which not only lets users obtain more resources than when buying individually, but also encourages them to lend idle resources, or resources occupied by low-frequency data, to others in exchange for more revenue in the future. Second, FairCache satisfies pay-as-you-go fairness, ensuring that each user’s revenue is proportional to the cost paid over the long term. Furthermore, FairCache satisfies the truthfulness property, ensuring that no user can obtain more resources by lying. Finally, FairCache satisfies the Pareto efficiency property, ensuring that as long as there are tasks in progress, the system maximizes resource utilization. We implement FairCache in Alluxio, and experimental results show that FairCache guarantees long-term cache fairness while maximizing the efficiency of system resource usage.
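The long-term fairness idea in the abstract can be illustrated with a small sketch. This is not the paper’s FairCache algorithm: the greedy deficit rule, unit-granularity allocation, and the `allocate` interface below are assumptions made purely for illustration. The point it demonstrates is the contrast with instantaneous fairness: a user who yields capacity in one round (e.g. because their demand is low) is compensated in later rounds, so cumulative allocations converge toward the shares users paid for.

```python
def allocate(payments, capacity, demands):
    """Toy long-term fair allocator (illustrative only, not FairCache).

    payments: {user: cost paid}, fixed up front.
    capacity: cache units available per round.
    demands:  list of per-round {user: units wanted}.
    Returns a list of per-round allocations.
    """
    total_paid = sum(payments.values())
    # Each user's entitled long-term fraction of the shared cache.
    share = {u: p / total_paid for u, p in payments.items()}
    received = {u: 0 for u in payments}  # cumulative units over all rounds
    granted = 0                          # total units handed out so far
    history = []
    for demand in demands:
        alloc = {u: 0 for u in payments}
        for _ in range(capacity):
            # Users that still want cache this round.
            want = [u for u in payments if alloc[u] < demand.get(u, 0)]
            if not want:
                break  # leftover capacity only when nobody has demand
            # Give the unit to whoever lags furthest behind the share
            # they paid for, measured over the whole history.
            u = max(want, key=lambda u: share[u] * granted - received[u])
            alloc[u] += 1
            received[u] += 1
            granted += 1
        history.append(alloc)
    return history
```

For example, with payments A:2, B:1 and capacity 3, if only A has demand in round 1, A takes all 3 units; in round 2, when both demand 3 units, B is favored until cumulative totals (A:4, B:2) again match the 2:1 payment ratio. An instantaneous-fairness policy would instead split round 2 by the 2:1 ratio and never repay B for the capacity A borrowed.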



Acknowledgements

This work was funded by National Key Research and Development Program of China (2020YFC1522702) and National Natural Science Foundation of China (61972277).

Author information

Correspondence to Shanjiang Tang.


Copyright information

© 2023 Springer Nature Switzerland AG

About this paper

Cite this paper

Zhou, Z. et al. (2023). Long-Term Fairness Scheduler for Pay-as-You-Use Cache Sharing Systems. In: Meng, W., Lu, R., Min, G., Vaidya, J. (eds) Algorithms and Architectures for Parallel Processing. ICA3PP 2022. Lecture Notes in Computer Science, vol 13777. Springer, Cham. https://doi.org/10.1007/978-3-031-22677-9_18


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-22676-2

  • Online ISBN: 978-3-031-22677-9

  • eBook Packages: Computer Science, Computer Science (R0)
