Machine Learning and Caching based Efficient Data Retrieval Framework

Shashwati Mishra, Rahul Bajpai, Naveen Gupta, Vibhutesh Kumar Singh
{"title":"基于机器学习和缓存的高效数据检索框架","authors":"Shashwati Mishra, Rahul Bajpai, Naveen Gupta, Vibhutesh Kumar Singh","doi":"10.1109/ANTS50601.2020.9342790","DOIUrl":null,"url":null,"abstract":"The explosive growth of wireless data and traffic, accompanied by the rapid advancements in intelligence and the processing power of user equipments (UEs), poses a very difficult challenge to the data providers to maintain the high data rate with sustainable quality-of-service (QoS). A lot of data can be saved by using caching based communication techniques, which would save the service providers a fortune and will make internet connectivity even more affordable. Also, there is room for saving bandwidth and using the limited number of servers and towers efficiently outputting a steadily healthy QoS. We propose an efficient data retrieval framework that uses caching based on the popularity of the pages where the popularity of pages is determined by the number of hits it gets over a month, which is the learning phase of the model and how frequently a given web page is requested. The proposed framework uses a causal decision tree in the background to determine the popularity of pages according to which the algorithm decides whether a given page is worthy of being cached or not. 
Results show that our proposed model outperforms the conventional data retrieval models in terms of cache missed probability.","PeriodicalId":426651,"journal":{"name":"2020 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Machine Learning and Caching based Efficient Data Retrieval Framework\",\"authors\":\"Shashwati Mishra, Rahul Bajpai, Naveen Gupta, Vibhutesh Kumar Singh\",\"doi\":\"10.1109/ANTS50601.2020.9342790\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The explosive growth of wireless data and traffic, accompanied by the rapid advancements in intelligence and the processing power of user equipments (UEs), poses a very difficult challenge to the data providers to maintain the high data rate with sustainable quality-of-service (QoS). A lot of data can be saved by using caching based communication techniques, which would save the service providers a fortune and will make internet connectivity even more affordable. Also, there is room for saving bandwidth and using the limited number of servers and towers efficiently outputting a steadily healthy QoS. We propose an efficient data retrieval framework that uses caching based on the popularity of the pages where the popularity of pages is determined by the number of hits it gets over a month, which is the learning phase of the model and how frequently a given web page is requested. The proposed framework uses a causal decision tree in the background to determine the popularity of pages according to which the algorithm decides whether a given page is worthy of being cached or not. 
Results show that our proposed model outperforms the conventional data retrieval models in terms of cache missed probability.\",\"PeriodicalId\":426651,\"journal\":{\"name\":\"2020 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS)\",\"volume\":\"44 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-12-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ANTS50601.2020.9342790\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ANTS50601.2020.9342790","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

The explosive growth of wireless data traffic, together with rapid advances in the intelligence and processing power of user equipment (UE), makes it difficult for data providers to maintain high data rates with sustainable quality-of-service (QoS). Caching-based communication techniques can save a large amount of data transfer, reducing service providers' costs and making internet connectivity more affordable. They also conserve bandwidth and allow a limited number of servers and towers to deliver a consistently healthy QoS. We propose an efficient data retrieval framework that caches pages according to their popularity, where popularity is determined by the number of hits a page receives over a month (the model's learning phase) and by how frequently the page is requested. The proposed framework uses a causal decision tree in the background to estimate page popularity, based on which the algorithm decides whether a given page is worth caching. Results show that the proposed model outperforms conventional data retrieval models in terms of cache-miss probability.
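The popularity-driven caching decision described in the abstract can be sketched in a few lines. This is an illustrative simplification, not the paper's implementation: the paper trains a causal decision tree on hit statistics, whereas the sketch below stands in for that model with a plain hit-count threshold collected during a learning phase. The class name `PopularityCache` and the `threshold`/`capacity` parameters are assumptions for illustration.

```python
from collections import Counter


class PopularityCache:
    """Toy popularity-based cache: pages whose hit count during the
    learning phase reaches a threshold are cached on retrieval.
    (A fixed threshold stands in for the paper's causal decision tree.)"""

    def __init__(self, threshold=3, capacity=100):
        self.hits = Counter()      # per-page hit counts from the learning phase
        self.cache = {}            # cached page -> content
        self.threshold = threshold
        self.capacity = capacity

    def record_hit(self, page):
        """Learning phase: count each request for a page."""
        self.hits[page] += 1

    def is_popular(self, page):
        """Caching decision: is the page's hit count over the window high enough?"""
        return self.hits[page] >= self.threshold

    def get(self, page, fetch):
        """Serve from cache on a hit; otherwise fetch from the origin,
        caching the page only if the popularity test passes.
        Returns (content, served_from_cache)."""
        if page in self.cache:
            return self.cache[page], True
        content = fetch(page)
        if self.is_popular(page) and len(self.cache) < self.capacity:
            self.cache[page] = content
        return content, False
```

A popular page incurs one cache miss on its first retrieval and is then served from cache, while unpopular pages are fetched from the origin every time and never occupy cache space, which is the mechanism by which popularity-aware caching lowers the cache-miss probability relative to caching everything or nothing.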