Abstract
Motivated by the presence of multiple caches and the non-negligible fetching latency in practical scenarios, we study the online file caching problem on multiple caches in latency-sensitive systems, e.g., edge computing. Our goal is to minimize the total latency of all file requests, where a file request can be served by a local hit, fetching from the cloud data center, a delayed hit, relaying to other caches, or bypassing to the cloud. We propose a file-weight-based algorithm, named OnMuLa, that supports delayed hits, relaying, and bypassing. We conduct extensive simulations on Google's trace and the YCSB benchmark. The results show that OnMuLa consistently and significantly outperforms existing methods across various experimental settings. Compared with the state-of-the-art scheme supporting multiple caches and bypassing, OnMuLa reduces latency by \(14.77\%\) on Google's trace and \(49.69\%\) on YCSB.
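The five serving options named in the abstract can be sketched as a simple dispatch. The sketch below is illustrative only: the cache layout, latency constants, and relay policy are assumptions, not the paper's actual OnMuLa algorithm or its file-weight-based eviction logic.

```python
# Illustrative sketch of the five serving options from the abstract.
# Latency values and the lookup order are assumptions for illustration;
# this is NOT the OnMuLa algorithm itself.

LOCAL_HIT = 0       # file already in the local cache
RELAY = 2           # assumed latency of fetching from a peer cache
CLOUD = 10          # assumed latency of fetching from the cloud

def serve(file_id, local_cache, peer_caches, in_flight):
    """Return (action, latency) for one request.

    in_flight maps file ids to the remaining latency of a pending fetch,
    so later requests for the same file become delayed hits.
    """
    if file_id in local_cache:
        return "hit", LOCAL_HIT
    if file_id in in_flight:
        # A fetch is already pending: wait for it rather than
        # issuing a new one (a delayed hit).
        return "delayed_hit", in_flight[file_id]
    for peer in peer_caches:
        if file_id in peer:
            return "relay", RELAY
    # Otherwise go to the cloud; a bypass serves the request
    # without admitting the file into the local cache.
    in_flight[file_id] = CLOUD
    return "fetch_or_bypass", CLOUD
```

For example, a request for a locally cached file returns `("hit", 0)`, while a second request for a file whose cloud fetch is still pending returns `("delayed_hit", 10)` under these assumed latencies.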
Acknowledgements
This work was partially supported by NSFC under Grant 62132009 and the Fundamental Research Funds for the Central Universities of China.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Li, G., Zhang, C., Ni, H., Tan, H. (2023). Online File Caching on Multiple Caches in Latency-Sensitive Systems. In: Dinh, T.N., Li, M. (eds) Computational Data and Social Networks. CSoNet 2022. Lecture Notes in Computer Science, vol. 13831. Springer, Cham. https://doi.org/10.1007/978-3-031-26303-3_26
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-26302-6
Online ISBN: 978-3-031-26303-3
eBook Packages: Computer Science, Computer Science (R0)