Online File Caching on Multiple Caches in Latency-Sensitive Systems

  • Conference paper, Computational Data and Social Networks (CSoNet 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13831)


Abstract

Motivated by the presence of multiple caches and the non-negligible fetching latency in practical scenarios, we study the online file caching problem on multiple caches in latency-sensitive systems, e.g., edge computing. Our goal is to minimize the total latency over all file requests, where a request can be served by a local hit, a fetch from the cloud data center, a delayed hit, a relay to another cache, or a bypass to the cloud. We propose a file-weight-based algorithm, named OnMuLa, that supports delayed hits, relaying, and bypassing. We conduct extensive simulations on Google's trace and the benchmark YCSB. The results show that our algorithm consistently outperforms existing methods across various experimental settings. Compared with the state-of-the-art scheme supporting multiple caches and bypassing, OnMuLa reduces latency by \(14.77\%\) on Google's trace and \(49.69\%\) on YCSB.
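The abstract lists five ways a request can be served: a local hit, a delayed hit on an in-flight fetch, a relay to another cache, a fetch from the cloud, or a bypass to the cloud. The sketch below illustrates that per-request decision at a single cache. All names, latency constants, and the tie-breaking rule are illustrative assumptions; the actual file-weight-based logic of OnMuLa is defined in the paper itself, not reproduced here.

```python
# Illustrative sketch of the serving options named in the abstract.
# L_RELAY, L_FETCH, L_BYPASS are assumed link latencies, not values
# from the paper.
L_RELAY, L_FETCH, L_BYPASS = 2.0, 10.0, 10.0

def serve(f, cache, peers, in_flight, now):
    """Return (action, latency) for one request of file f at one cache.

    cache     -- set of files held locally
    peers     -- list of file sets held by the other caches
    in_flight -- map from file to the time its pending fetch completes
    now       -- current time
    """
    if f in cache:                        # local hit: served immediately
        return "hit", 0.0
    if f in in_flight:                    # delayed hit: wait for the
        return "delayed_hit", max(0.0, in_flight[f] - now)  # pending fetch
    if any(f in p for p in peers):        # relay: a peer cache holds f
        return "relay", L_RELAY
    if L_FETCH <= L_BYPASS:               # fetch: bring f into the cache
        in_flight[f] = now + L_FETCH      # later requests see a delayed hit
        return "fetch", L_FETCH
    return "bypass", L_BYPASS             # bypass: serve from cloud, no caching
```

For example, a request for a file whose fetch completes at time 5.0, issued at time 2.0, is a delayed hit with latency 3.0 rather than paying the full fetch latency again; this is the effect of supporting delayed hits that the abstract highlights.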


References

  1. Abdi, M., et al.: A community cache with complete information. In: USENIX FAST 2021, pp. 323–340 (2021)

  2. Atre, N., Sherry, J., Wang, W., Berger, D.S.: Caching with delayed hits. In: ACM SIGCOMM (2020)

  3. Beckmann, N., Chen, H., Cidon, A.: LHD: improving cache hit rate by maximizing hit density. In: USENIX NSDI (2018)

  4. Breslau, L., Cao, P., Fan, L., Phillips, G., Shenker, S.: Web caching and Zipf-like distributions: evidence and implications. In: IEEE INFOCOM (1999)

  5. Cooper, B.F., Silberstein, A., Tam, E., Ramakrishnan, R., Sears, R.: Benchmarking cloud serving systems with YCSB. In: ACM SoCC (2010)

  6. Dilley, J., Maggs, B., Parikh, J., Prokop, H., Sitaraman, R., Weihl, B.: Globally distributed content delivery. IEEE Internet Comput. 6(5), 50–58 (2002)

  7. Epstein, L., Imreh, C., Levin, A., Nagy-György, J.: Online file caching with rejection penalties. Algorithmica 71(2), 279–306 (2015). https://doi.org/10.1007/s00453-013-9793-0

  8. Fuerst, A., Sharma, P.: FaasCache: keeping serverless computing alive with greedy-dual caching. In: ACM ASPLOS 2021, pp. 386–400 (2021)

  9. Karlsson, M.: Cache memory design trade-offs for current and emerging workloads. Ph.D. thesis, Citeseer (2003)

  10. Liang, Y., et al.: CacheSifter: sifting cache files for boosted mobile performance and lifetime. In: USENIX FAST 2022, pp. 445–459 (2022)

  11. Megiddo, N., Modha, D.S.: ARC: a self-tuning, low overhead replacement cache. In: USENIX FAST 2003 (2003)

  12. Pan, L., Wang, L., Chen, S., Liu, F.: Retention-aware container caching for serverless edge computing. In: IEEE INFOCOM 2022 (2022)

  13. Ramanujam, M., Madhyastha, H.V., Netravali, R.: Marauder: synergized caching and prefetching for low-risk mobile app acceleration. In: ACM MobiSys (2021)

  14. Reiss, C., Wilkes, J., Hellerstein, J.: Google cluster-usage trace. Technical report (2011)

  15. Sleator, D.D., Tarjan, R.E.: Amortized efficiency of list update and paging rules. Commun. ACM 28(2), 202–208 (1985)

  16. Tan, H., Jiang, S.H.C., Han, Z., Liu, L., Han, K., Zhao, Q.: Camul: online caching on multiple caches with relaying and bypassing. In: IEEE INFOCOM (2019)

  17. Vietri, G., et al.: Driving cache replacement with ML-based LeCaR. In: USENIX HotStorage 2018 (2018)

  18. Wang, J., Hu, Y.: WOLF: a novel reordering write buffer to boost the performance of log-structured file systems. In: USENIX FAST 2002 (2002)

  19. Wendell, P., Freedman, M.J.: Going viral: flash crowds in an open CDN. In: ACM/USENIX IMC (2011)

  20. Yan, G., Li, J.: Towards latency awareness for content delivery network caching. In: USENIX ATC 2022, pp. 789–804 (2022)

  21. Young, N.E.: On-line file caching. Algorithmica 33(3), 371–383 (2002). https://doi.org/10.1007/s00453-001-0124-5

  22. Yuan, M., Zhang, L., He, F., Tong, X., Li, X.Y.: InFi: end-to-end learnable input filter for resource-efficient mobile-centric inference. In: ACM MobiCom (2022)

  23. Zhang, C., Tan, H., Li, G., Han, Z., Jiang, S.H.C., Li, X.Y.: Online file caching in latency-sensitive systems with delayed hits and bypassing. In: IEEE INFOCOM 2022, pp. 1059–1068. IEEE (2022)


Acknowledgements

This work is partially supported by NSFC under Grant 62132009 and the Fundamental Research Funds for the Central Universities of China.

Author information

Correspondence to Haisheng Tan.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Li, G., Zhang, C., Ni, H., Tan, H. (2023). Online File Caching on Multiple Caches in Latency-Sensitive Systems. In: Dinh, T.N., Li, M. (eds) Computational Data and Social Networks. CSoNet 2022. Lecture Notes in Computer Science, vol 13831. Springer, Cham. https://doi.org/10.1007/978-3-031-26303-3_26

  • DOI: https://doi.org/10.1007/978-3-031-26303-3_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-26302-6

  • Online ISBN: 978-3-031-26303-3

  • eBook Packages: Computer Science (R0)
