Wireless Multihop Device-to-Device Caching Networks
Sang-Woon Jeon, Songnam Hong, Mingyue Ji, G. Caire, A. Molisch
IEEE Trans. Inf. Theory, pp. 1662-1676. DOI: 10.1109/ICC.2015.7249398
Citations: 41
Abstract
We consider a wireless device-to-device network, where $n$ nodes are uniformly distributed at random over the network area. Each node caches $M$ files from a library of size $m \geq M$. Each node in the network requests a file from the library independently at random, according to a popularity distribution, and is served by other nodes having the requested file in their local cache via (possibly) multihop transmissions. Under the classical “protocol model” of wireless networks, we characterize the optimal per-node capacity scaling law for a broad class of heavy-tailed popularity distributions, including Zipf distributions with exponent less than one. In the parameter regime of interest, i.e., $m = o(nM)$, we show that a decentralized random caching strategy with uniform probability over the library yields the optimal per-node capacity scaling of $\Theta(\sqrt{M/m})$ for heavy-tailed popularity distributions. This scaling is constant in $n$, thus yielding throughput scalability with the network size. Furthermore, the multihop capacity scaling can be significantly better than that of single-hop caching networks, for which the per-node capacity is $\Theta(M/m)$. The multihop capacity scaling law can be further improved for a Zipf distribution with exponent larger than some threshold greater than one, by using decentralized random caching uniformly across a subset of the most popular files in the library. Namely, ignoring a subset of less popular files (i.e., effectively reducing the size of the library) can significantly improve the throughput scaling while guaranteeing that all nodes are served with high probability as $n$ increases.
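To make the setup concrete, the following is a minimal Python sketch of the decentralized placement described in the abstract: each of $n$ nodes independently caches $M$ files drawn uniformly at random from a library of $m$ files, and requests follow a Zipf popularity distribution. It is only illustrative; it does not model the protocol-model multihop delivery or the achievability scheme of the paper, and the parameter values and function names are hypothetical. It also prints the per-node capacity scaling orders quoted in the abstract, $\Theta(\sqrt{M/m})$ for multihop versus $\Theta(M/m)$ for single-hop.

```python
import numpy as np

def zipf_popularity(m, gamma):
    """Zipf popularity over a library of m files with exponent gamma."""
    weights = 1.0 / np.arange(1, m + 1) ** gamma
    return weights / weights.sum()

def uniform_random_caches(n, m, M, rng):
    """Each node independently caches M distinct files drawn uniformly
    from the library (decentralized placement, regime m = o(nM))."""
    return [rng.choice(m, size=M, replace=False) for _ in range(n)]

def local_hit_fraction(caches, requests):
    """Fraction of requests already in the requester's own cache
    (the remaining requests would be fetched over multihop routes,
    which this sketch does not simulate)."""
    hits = sum(req in cache for cache, req in zip(caches, requests))
    return hits / len(requests)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, M, gamma = 10_000, 1_000, 10, 0.8   # hypothetical parameters
    popularity = zipf_popularity(m, gamma)
    caches = uniform_random_caches(n, m, M, rng)
    requests = rng.choice(m, size=n, p=popularity)

    print(f"local hit fraction            : {local_hit_fraction(caches, requests):.3f}")
    print(f"multihop scaling  ~ sqrt(M/m) : {np.sqrt(M / m):.3f}")
    print(f"single-hop scaling ~ M/m      : {M / m:.3f}")
```

Under this placement, a given file is cached by roughly $nM/m$ nodes, which is why the regime $m = o(nM)$ keeps every file available somewhere in the network with high probability; the scaling printouts simply restate the orders from the abstract rather than measuring them.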