Abstract
In this paper, we study a network architecture with two layers of caches, where users receive their requested content via intermediate helpers connected to a central server. While coded caching in a two-layer hierarchical model has previously been shown to improve data rates when cache capacities are uniform and no coordination exists between users, our work goes a step further. We introduce heterogeneous cache sizes among both the helpers and the users, addressing scenarios where the number of popular files can be either smaller or larger than the number of users in the network. Leveraging a recently proposed modified coded caching scheme, combined with a zero-padding technique, we present novel results on data rates, supported by an illustrative example. Our contribution extends to the formulation of two distinct coded schemes within the hierarchical scenario. Furthermore, we optimize the proportions of files and memories allocated to each scheme to improve data transfer efficiency, and then derive a lower bound on the total rate. Moreover, we show that the total rate achieved by the proposed heterogeneous approach is lower than that of a homogeneous network whose caches equal the minimum cache size present in the heterogeneous network, but higher than that of a homogeneous network with the same average cache size. In addition, we illustrate that, by proper selection of the proportions of files and memories allocated to each scheme, we can reduce the performance degradation caused by the heterogeneity of the network.













References
Cisco Annual Internet Report, White Paper 2018–2023.
Maddah-Ali, M. A., & Niesen, U. (2014). Fundamental limits of caching. IEEE Transactions on Information Theory, 60(5), 2856–2867. https://doi.org/10.1109/TIT.2014.2306938
Maddah-Ali, M. A., & Niesen, U. (2015). Decentralized coded caching attains order-optimal memory-rate tradeoff. IEEE/ACM Transactions on Networking, 23(4), 1029–1040. https://doi.org/10.1109/TNET.2014.2317316
Karamchandani, N., Niesen, U., Maddah-Ali, M. A., & Diggavi, S. N. (2016). Hierarchical coded caching. IEEE Transactions on Information Theory, 62(6), 3212–3229. https://doi.org/10.1109/TIT.2016.2557804
Wang, S., Li, W., Tian, X., & Liu, H. (2015). Coded caching with heterogeneous cache sizes. arXiv preprint arXiv:1504.01123
Yu, Q., Maddah-Ali, M. A., & Avestimehr, A. S. (2018). The exact rate-memory tradeoff for caching with uncoded prefetching. IEEE Transactions on Information Theory, 64(2), 1281–1296. https://doi.org/10.1109/TIT.2017.2785237
Niesen, U., & Maddah-Ali, M. A. (2017). Coded caching with nonuniform demands. IEEE Transactions on Information Theory, 63(2), 1146–1158. https://doi.org/10.1109/TIT.2016.2639522
Saberali, S. A., Lampe, L., & Blake, I. F. (2019). Decentralized coded caching without file splitting. IEEE Transactions on Wireless Communications, 18(2), 1289–1303. https://doi.org/10.1109/TWC.2019.2891618
Destounis, A., Ghorbel, A., Paschos, G. S., & Kobayashi, M. (2020). Adaptive coded caching for fair delivery over fading channels. IEEE Transactions on Information Theory, 66(7), 4530–4546. https://doi.org/10.1109/TIT.2020.2998104
Peter, E., & Rajan, B. S. (2021). Decentralized and online coded caching with shared caches: Fundamental limits with uncoded prefetching. arXiv preprint arXiv:2101.09572v1
Zhang, L., Wang, Z., Xiao, M., Wu, G., Liang, Y., & Li, Sh. (2018). Decentralized caching scheme and performance limits in two-layer networks. IEEE Transactions on Vehicular Technology, 67(12), 12177–12192. https://doi.org/10.1109/TVT.2018.2873723
Javadi, E., Zeinalpour-Yazdi, Z., & Parvaresh, F. (2019). Decentralized hierarchical coded caching over heterogeneous wireless networks with multi-level popularity content. Wireless Personal Communications. https://doi.org/10.1007/s11277-019-06369-z
Takita, M., Hirotomo, M., & Morii, M. (2018). Coded caching for hierarchical networks with a different number of layers. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences. https://doi.org/10.1587/transfun.E101.A.2037
Wang, K., Wu, Y., Chen, J., & Yin, H. (2019). Reduce transmission delay for caching-aided two-layer networks. In 2019 IEEE International Symposium on Information Theory (ISIT). https://doi.org/10.1109/ISIT.2019.8849624
Kong, Y., Wu, Y., & Cheng, M. (2022). Centralized hierarchical coded caching scheme over two-layer networks. arXiv preprint arXiv:2205.00233.
Kong, Y., Wu, Y., & Cheng, M. (2022). Hierarchical cache-aided linear function retrieval with security and privacy constraints. arXiv preprint arXiv:2209.03633.
Nikjoo, F., Mirzaei, A., & Mohajer, A. (2018). A novel approach to efficient resource allocation in NOMA heterogeneous networks: multi-criteria green resource management. Applied Artificial Intelligence, 32(7–8), 583–612.
Mohajer, A., Sorouri, F., Mirzaei, A., Ziaeddini, A., Jalali Rad, K., & Bavaghar, M. (2022). Energy-aware hierarchical resource management and backhaul traffic optimization in heterogeneous cellular networks. IEEE Systems Journal, 16(4), 5188–5199.
Mohajer, A., Sam Daliri, M., Mirzaei, A., Ziaeddini, A., Nabipour, M., & Bavaghar, M. (2023). Heterogeneous computational resource allocation for NOMA: Toward green mobile edge-computing systems. IEEE Transactions on Services Computing, 16(2), 1225–1238.
Ibrahim, A. M., Zewail, A. A., & Yener, A. (2019). Coded caching for heterogeneous systems: an optimization perspective. IEEE Transactions on Communications, 6(8), 5321–5335. https://doi.org/10.1109/TCOMM.2019.2914393
Ibrahim, A. M., Zewail, A. A., & Yener, A. (2018). Benefits of Coded Placement for Networks with Heterogeneous Cache Sizes. In 2018 52nd Asilomar Conference on Signals, Systems, and Computers. https://doi.org/10.1109/ACSSC.2018.8645503
Cao, D., Zhang, D., Chen, P., Liu, N., Kang, W., & Gunduz, D. (2018). Coded caching with heterogeneous cache sizes and link qualities: The two-user case. In 2018 IEEE International Symposium on Information Theory (ISIT). https://doi.org/10.1109/ISIT.2018.8437635
Mohammadi Amiri, M., Yang, Q., & Gündüz, D. (2017). Decentralized caching and coded delivery with distinct cache capacities. IEEE Transactions on Communications, 65(11), 4657–4669. https://doi.org/10.1109/TCOMM.2017.2734767
Wang, Q., Cui, Y., Jin, S., Zou, J., Li, C. H., & Xiong, H. (2020). Optimization-based decentralized coded caching for files and caches with arbitrary sizes. IEEE Transactions on Communications, 68(4), 2090–2105. https://doi.org/10.1109/TCOMM.2019.2963031
Sengupta, A., Tandon, R., & Clancy, T. C. (2017). Layered caching for heterogeneous storage. In 2016 50th Asilomar Conference on Signals, Systems and Computers. https://doi.org/10.1109/ACSSC.2016.7869139
Bakhshzad Mahmoodi, H., Kaleva, J., Shariatpanahi, S. P., & Tolli, A. (2023). D2D assisted multi-antenna coded caching. IEEE Access, 11(22646703), 16271–16287. https://doi.org/10.1109/ACCESS.2023.3245882
Bakhshzad Mahmoodi, H., Salehi, M. J., & Tolli, A. (2023). Multi-antenna coded caching for location-dependent content delivery. IEEE Transactions on Wireless Communications. https://doi.org/10.1109/TWC.2023.3277983
Bakhshzad Mahmoodi, H., Salehi, M. J., & Tolli, A. (2023). Low-complexity multi-antenna coded caching using location-aware placement delivery arrays. arXiv:2305.06858
Madhusudan, S., Madapatha, Ch., Makki, B., Guo, H. & Svensson, T. (2023). Beamforming in wireless coded-caching systems. arXiv:2309.05276
Chen, Z., Fan, P., & Letaief, K. B. (2015). Fundamental limits of caching: Improved bounds for small buffer users. arXiv:1407.1935
Wan, K., Tuninetti, D., & Piantanida, P. (2016). On caching with more users than files. In 2016 IEEE International Symposium on Information Theory (ISIT). https://doi.org/10.1109/ISIT.2016.7541276
Mohammadi Amiri, M., Yang, Q., & Gündüz, D. (2016). Coded caching for a large number of users. In 2016 IEEE Information Theory Workshop (ITW). https://doi.org/10.1109/ITW.2016.7606818
Appendices
1.1 Appendix A: Proof of Theorem 1
In this section, we derive the approximately optimal values of \(\alpha \) and \(\beta \) that minimize \(R_T = R_1+k_1 R_2\). We consider two cases.
A. Case \(N\ge k_1 k_2\): In \(R_T\), we can ignore \(R_1^A\) and \(R_1^B\) compared to \(k_1 R_2\), because \(k_1 R_2\) has the coefficient \(k_1\), while for \(R_1^A\), which has the coefficient \(\alpha k_2\), we have
and
This is because the memory sizes of the helpers are much larger than the memory sizes of the users. So we have \( \min {R_T} \approx \min k_1R_2. \)
Setting \(\alpha = \beta \) in \(\frac{\partial R_2(\alpha ,\beta )}{\partial \beta }\) gives \(\frac{\partial R_2(\alpha ,\beta )}{\partial \beta } =0\), and setting \(\alpha = \beta \) in \(\frac{\partial ^2 R_2(\alpha ,\beta )}{\partial \beta ^2}\) gives \( \frac{\partial ^2 R_2(\alpha ,\beta )}{\partial \beta ^2} \ge 0. \) By setting \(\alpha = \beta \) in \(R_T = R_1+ k_1R_2\), we have
Notice that \(R_{T,3}\) is constant, so we calculate \(\min ( R_{T,1}(\alpha ) + R_{T,2}(\alpha ))\). \(R_{T,1}(\alpha )\) is zero on the interval \(\alpha \in [0, \frac{M_H^1}{N}]\) and then increases to \(k_2 \sum _{i=1}^{k_1} \prod _{j=1}^{i} (1-\frac{M_H^j}{N})\). On the other hand, \( R_{T,2}(\alpha )\) decreases linearly, so on the interval \(\alpha \in [0, \frac{M_H^1}{N}]\), the minimum of \( R_{T,1}(\alpha ) + R_{T,2}(\alpha )\) occurs at the point \(\alpha =\frac{M_H^1}{N}\). On the interval \(\alpha \in [\frac{M_H^1}{N}, 1]\), we can write a linear approximation of \( R_{T,1}(\alpha ) + R_{T,2}(\alpha )\), and the slope of this line, i.e.,
is positive, so on the interval \(\alpha \in [\frac{M_H^1}{N}, 1]\), we have an increasing function, whose minimum is attained at \(\alpha = \frac{M_H^1}{N}\).
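The piecewise argument above can be checked numerically. The sketch below uses hypothetical cache sizes (not from the paper) and an arbitrarily chosen linear decreasing term standing in for \(R_{T,2}(\alpha)\); it confirms that the minimum of the sum sits at \(\alpha = \frac{M_H^1}{N}\).

```python
import numpy as np

# Hypothetical parameters for illustration only (not from the paper):
N = 100                                    # number of popular files
k1, k2 = 4, 10                             # helpers, users per helper
M_H = np.array([20.0, 30.0, 40.0, 50.0])   # heterogeneous helper caches, M_H^1 smallest

def R_T1(alpha):
    """Increasing term: zero while alpha <= M_H^1/N, since the first factor
    (1 - M_H^1/(alpha*N)) is non-positive there and is clamped to zero."""
    factors = np.clip(1.0 - M_H / (alpha * N), 0.0, None)
    return k2 * sum(np.prod(factors[:i]) for i in range(1, k1 + 1))

def R_T2(alpha):
    """Stand-in for the linearly decreasing term; slope chosen arbitrarily."""
    return 5.0 * (1.0 - alpha)

alphas = np.linspace(0.01, 1.0, 1000)
total = np.array([R_T1(a) + R_T2(a) for a in alphas])
alpha_star = alphas[np.argmin(total)]
print(alpha_star, M_H[0] / N)  # the minimizer lies near alpha = M_H^1/N = 0.2
```

Since \(R_{T,1}\) vanishes on \([0, M_H^1/N]\) while \(R_{T,2}\) decreases, and past that point the increase of \(R_{T,1}\) dominates, the grid search lands on the corner point, matching the analysis.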
B. Case \(N<k_1 k_2\): Among the different parts, \(R_1^A\) and \( k_1 R_2\) have the factors \( \alpha N \) and \( k_1 \), respectively, so we can ignore \(R_1^B\) compared to them. On the other hand, \(R_1^A\) contains \(\sum _{i=1}^{N} \prod _{j=1}^{i} (1-\frac{M_H^j}{\alpha N})\), while \(k_1 R_2\) contains \(\sum _{i=1}^{N} \prod _{j=1}^{i} (1-\frac{\beta M_U^j}{\alpha N})\) or \(\sum _{i=1}^{N} \prod _{j=1}^{i} (1-\frac{(1-\beta )M_U^j}{(1-\alpha )N})\). We know that the memory sizes of the helpers are much larger than the memory sizes of the users. So
and
Hence, we can ignore \(R_1^A\) compared to \(k_1 R_2\) and \(\min {(R_T)} \approx \min ({k_1 R_2}).\) Setting \(\alpha = \beta \) in \(\frac{\partial R_2(\alpha ,\beta )}{\partial \beta }\) gives \(\frac{\partial R_2(\alpha ,\beta )}{\partial \beta } =0, \) and setting \(\alpha = \beta \) in \(\frac{\partial ^2 R_2(\alpha ,\beta )}{\partial \beta ^2}\) gives \( \frac{\partial ^2 R_2(\alpha ,\beta )}{\partial \beta ^2} \ge 0. \) By setting \(\alpha =\beta \) in \(R_T \) and \(\alpha =\frac{M_H^1}{N}\) in \(\frac{\partial R_T(\alpha )}{\partial \alpha } \), we have
For a large value of \(N\), \(\sum _{i=1}^{N} \prod _{j=1}^{i} (1-\frac{M_U^j}{N})\) tends to \(N\), so we have \(\frac{\partial R_T(\alpha )}{\partial \alpha }= 0.\)
1.2 Appendix B: Proof of Theorem 2
In this section, we derive the information-theoretic lower bound on \(R_T\) under heterogeneous cache sizes, which is independent of any specific scheme.
In the first layer, the total memory of the helpers, the total memory of the users, and the total information transmitted by the server must be at least as large as the total size of the distinct files reconstructed by the users. Let \(s_1 \in \{1,2,\ldots, \min \{N,k_1\}\}\), \(s_2 \in \{1,2,\ldots, \min \{N,k_2\}\}\), and consider the set of users \((i, j)\) with \(i \in \{1,2,\ldots, s_1\}\) and \(j \in \{1,2,\ldots, s_2\}\). Then we have the following cut-set bound for \(R_1\):
so we have
which can be rewritten as
We follow the same steps for the rate between the helpers and their users. We have
so we have
and finally
Now, having obtained the lower bounds on \(R^*_1\) and \(R^*_2\), the lower bound on \(R_T = R_1+k_1 R_2\) follows as the cut-set bound above.
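For reference, the analogous single-layer cut-set bound of Maddah-Ali and Niesen [2], from which hierarchical bounds of this kind are typically assembled, can be stated as:

```latex
% Single-layer cut-set bound [2]: a cut isolating s users, each with cache
% size M, over lfloor N/s rfloor rounds of distinct demands must deliver
% s * lfloor N/s rfloor files, giving
R^*(M) \;\ge\; \max_{s \in \{1, \ldots, \min\{N,K\}\}}
  \left( s - \frac{s}{\lfloor N/s \rfloor}\, M \right)
```

Here \(K\) denotes the number of users in the single-layer setting; the two-layer bound applies this cut argument separately to the server-to-helper and helper-to-user links.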
1.3 Appendix C: Comparing the rate of the proposed heterogeneous method and the rate of a homogeneous model with minimum cache size
The first rate between the server and the helpers when \( N \ge k_1 k_2\) is
If the memory sizes of all helpers are equal to the minimum helper cache size, and likewise the memory sizes of all users are equal to the minimum user cache size, we have:
Using the sum of the geometric series, we obtain:
which is the rate of the first layer in the homogeneous cache size scheme presented in [4]. On the other hand, we assume that \(M_H^1< M_H^2< \cdots < M_H^{k_1}\), so we have \((1-\frac{M_H^1}{\alpha N})> (1-\frac{M_H^2}{\alpha N})> \cdots > (1-\frac{M_H^{k_1}}{\alpha N}).\) Since \(0< 1-\frac{M_H^i}{\alpha N} < 1\) for each \(i\), we have \( (1-\frac{M_H^1}{\alpha N})^i \ge \prod _{j=1}^{i} (1-\frac{M_H^j}{\alpha N}),\) so
which leads to
With the same argument, we have \(R_1^{Hom-B}(\alpha ,\beta ) > R_1^{Het-B}(\alpha ,\beta )\) and therefore
Similar to the above argument, we have
Now since \(R_T(\alpha ,\beta ) = R_1(\alpha ,\beta )+ k_1 R_2(\alpha ,\beta )\), we have
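The product-versus-power inequality underlying this comparison can be verified numerically. The snippet below uses hypothetical cache sizes satisfying \(M_H^1 < \cdots < M_H^{k_1} < \alpha N\):

```python
# Numerical check of the key inequality in Appendix C, with hypothetical
# values (any choice with M_H^1 < ... < M_H^{k1} < alpha*N behaves the same):
alpha, N = 0.5, 100
M_H = [10, 20, 30, 40]                           # M_H^1 is the minimum helper cache
factors = [1 - m / (alpha * N) for m in M_H]     # each factor lies in (0, 1), decreasing

for i in range(1, len(M_H) + 1):
    power = factors[0] ** i        # homogeneous term, built from the minimum cache
    prod = 1.0
    for j in range(i):
        prod *= factors[j]         # heterogeneous term: product of distinct factors
    assert power >= prod           # strict whenever i >= 2
print("homogeneous (minimum-cache) terms dominate for every i")
```

Summing over \(i\) then shows the homogeneous rate expression dominates the heterogeneous one term by term, which is exactly the ordering \(R_1^{Hom} > R_1^{Het}\) claimed above.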
Javadi, E., Zeinalpour-Yazdi, Z. & Parvaresh, F. Hierarchical coded caching with heterogeneous cache sizes. Wireless Netw 30, 2001–2016 (2024). https://doi.org/10.1007/s11276-023-03620-1