Caching and Machine Learning Integration Methods on Named Data Network: a Survey

R. Negara, Nana Rachmana Syambas
DOI: 10.1109/TSSA51342.2020.9310811
Published in: 2020 14th International Conference on Telecommunication Systems, Services, and Applications (TSSA)
Publication date: 2020-11-04
Citations: 10

Abstract

The caching mechanism is an essential part of future network design because it can improve the Quality of Experience (QoE) for users. Therefore, recent studies have examined the most appropriate caching techniques for future networks. Named Data Networking (NDN) is a future data-centric network architecture that uses a cache mechanism to store data packets in content stores. The main problem with traditional caching techniques is that they cannot handle large volumes of data packets that arrive at high speed and change depending on customers' requests. Undoubtedly, Machine Learning (ML) and Deep Learning (DL) algorithms play essential roles in many fields. Recent research adds ML or DL functions to cache decisions, such as cache replacement, content selection based on popularity, and cache placement. This paper performs an in-depth review of methods for integrating caching and ML algorithms in future networks. The aim is to understand their goals, contributions, choice of learning algorithms, network topologies, caching strategies, and their impact on improving network performance. This paper divides caching techniques into four categories to help readers understand the opportunities offered by each caching method. Furthermore, we discuss how a joint optimization strategy using ML and DL greatly impacts the network.
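To make the cache decisions mentioned above concrete, the sketch below shows a minimal NDN-style content store with popularity-based (LFU-like) replacement: each request increments a popularity counter, and when the store is full the least-requested content is evicted. The class name, content names, and capacity are illustrative assumptions, not taken from the surveyed paper; ML/DL approaches would typically replace the simple frequency counter with a learned popularity predictor.

```python
from collections import defaultdict


class ContentStore:
    """Minimal sketch of an NDN content store with popularity-based
    (LFU-style) cache replacement. Names and capacity are illustrative."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.store = {}               # content name -> data packet
        self.hits = defaultdict(int)  # content name -> request count (popularity)

    def request(self, name):
        """Record a request for `name`; return cached data, or None on a miss."""
        self.hits[name] += 1
        return self.store.get(name)

    def insert(self, name, data):
        """Cache a data packet, evicting the least-popular entry if full."""
        if name not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda n: self.hits[n])
            del self.store[victim]
        self.store[name] = data


cs = ContentStore(capacity=2)
cs.insert("/video/a", "A")
cs.insert("/video/b", "B")
cs.request("/video/a")      # /video/a becomes the more popular entry
cs.insert("/video/c", "C")  # evicts /video/b, the least-requested content
print(sorted(cs.store))     # ['/video/a', '/video/c']
```

A learned policy would keep the same insert/evict interface but choose the eviction victim (or decide whether to cache at all) from predicted future popularity rather than raw past counts.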