Abstract:
In this work, we consider the problem of ‘fresh’ caching at distributed (front-end) local caches of content that is subject to ‘dynamic’ updates at the (back-end) database. We first provide new models and analyses of the average operational cost of a network of distributed edge-caches that utilizes wireless multicast to refresh aging content. We attack the problems of what to cache in each edge-cache and how to split the incoming demand amongst them (also called “load-splitting” in the rest of the paper) in order to minimize the operational cost. While the general form of the problem has an NP-hard knapsack structure, we are able to solve the problem completely by judiciously choosing the number of edge-caches to be deployed over the network. This reduces the complex problem to a solvable special case. Interestingly, our findings reveal that the optimal caching policy necessitates unequal load-splitting over the edge-caches even when all conditions are symmetric. Moreover, we find that edge-caches with higher load will generally cache fewer, but relatively more popular, content items. We further investigate the tradeoffs between cost reduction and cache savings when employing equal and optimal load-splitting solutions for demand with Zipf(z) popularity distribution. Our analysis reveals that equal load-splitting to edge-caches achieves close-to-optimal cost for less predictable demand (z < 2) while also saving in the cache size. On the other hand, for more predictable demand (z > 2), optimal load-splitting results in substantial cost gains while decreasing the cache occupancy.
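The distinction the abstract draws between less predictable (z < 2) and more predictable (z > 2) demand can be illustrated with the Zipf(z) popularity model itself: as z grows, demand concentrates on the few most popular items. The sketch below (not from the paper; an illustrative helper with assumed names and a catalog size of 1000) shows how the demand share of the top-ranked items changes with z.

```python
import math

def zipf_popularity(n_items: int, z: float) -> list[float]:
    # Normalized Zipf(z) popularity: p_i proportional to 1 / i^z
    # for rank i = 1..n_items.
    weights = [1.0 / (i ** z) for i in range(1, n_items + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# Compare a less predictable (z < 2) and a more predictable (z > 2) regime:
# the top ranks capture far more of the demand as z increases.
for z in (1.0, 3.0):
    p = zipf_popularity(1000, z)
    print(f"z = {z}: top-1 share = {p[0]:.3f}, top-10 share = {sum(p[:10]):.3f}")
```

For z = 3 the single most popular item already accounts for over 80% of the demand, which is why caching decisions and load-splitting gains behave so differently across the two regimes discussed above.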
Published in: IEEE/ACM Transactions on Networking ( Volume: 31, Issue: 5, October 2023)