Computer Networks: Latest Articles

TPE-BFL: Training Parameter Encryption scheme for Blockchain based Federated Learning system
IF 4.4 · CAS Zone 2 · Computer Science
Computer Networks · Pub Date: 2024-08-05 · DOI: 10.1016/j.comnet.2024.110691
{"title":"TPE-BFL: Training Parameter Encryption scheme for Blockchain based Federated Learning system","authors":"","doi":"10.1016/j.comnet.2024.110691","DOIUrl":"10.1016/j.comnet.2024.110691","url":null,"abstract":"<div><p>Blockchain technology plays a pivotal role in addressing the single point of failure issues in federated learning systems, due to the immutable nature and decentralized architecture. However, traditional blockchain-based federated learning systems still face privacy and security challenges when transmitting training model parameters to individual nodes. Malicious nodes within the system can exploit this process to steal parameters and extract sensitive information, leading to data leakage. To address this problem, we propose a Training Parameter Encryption scheme for Blockchain based Federated Learning system (TPE-BFL). In TPE-BFL, the training parameters of the system model are encrypted using the paillier algorithm with the property of addition homomorphism. This encryption mechanism is integrated into the workflows of three distinct roles within the system: workers, validators, and miners. (1) Workers utilize the paillier encryption algorithm to encrypt training parameters for local training models. (2) Validators decrypt received encrypted training parameters using private keys to verify their validity. (3) Miners receive cryptographic training parameters from validators, validate them, and generate blocks for subsequent global model updates. By implementing the TPE-BFL mechanism, we not only preserve the immutability and decentralization advantages of blockchain technology but also significantly enhance the privacy protection capabilities during data transmission in federated learning systems. In order to verify the security of TPE-BFL, we leverage the semantic security inherent in the Paillier encryption algorithm to theoretically substantiate the security of our system. In addition, we conducted a large number of experiments on real-world data to prove the validity of our proposed TPE-BFL, and when 15% of malicious devices are present, TPE-BFL achieve 92% model accuracy, a 5% improvement over the blockchain-based decentralized FL framework (VBFL).</p></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141953837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
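To make the additively homomorphic step concrete, here is a minimal sketch using the open-source python-paillier (`phe`) package. The worker/aggregator split, toy parameter vectors, and key size are illustrative assumptions, not the paper's actual TPE-BFL implementation.

```python
# Sketch: additively homomorphic aggregation of model parameters with Paillier.
# Assumes the `phe` (python-paillier) package; the roles and values below are
# illustrative, not TPE-BFL's actual workflow.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each worker encrypts its local model parameters before sharing them.
worker_updates = [
    [0.12, -0.05, 0.33],   # worker 1's (toy) parameter vector
    [0.10, -0.01, 0.29],   # worker 2's (toy) parameter vector
]
encrypted_updates = [
    [public_key.encrypt(w) for w in update] for update in worker_updates
]

# Additive homomorphism: ciphertexts can be summed without decryption,
# so the aggregating node never sees an individual plaintext parameter.
encrypted_sum = [sum(params) for params in zip(*encrypted_updates)]

# Only the private-key holder decrypts, and only the aggregate.
averaged = [private_key.decrypt(c) / len(worker_updates) for c in encrypted_sum]
print(averaged)  # ~[0.11, -0.03, 0.31]
```

The privacy argument rests on exactly this property: intermediate nodes operate on ciphertexts, and decryption is only ever applied to aggregated values.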
Joint optimization of application placement and resource allocation for enhanced performance in heterogeneous multi-server systems
IF 4.4 · CAS Zone 2 · Computer Science
Computer Networks · Pub Date: 2024-08-05 · DOI: 10.1016/j.comnet.2024.110692
{"title":"Joint optimization of application placement and resource allocation for enhanced performance in heterogeneous multi-server systems","authors":"","doi":"10.1016/j.comnet.2024.110692","DOIUrl":"10.1016/j.comnet.2024.110692","url":null,"abstract":"<div><p>Efficiently placing applications remains a critical challenge across diverse multi-server environments, including web hosting centers, cloud computing, and edge computing environments. Unfortunately, most existing studies tend to overlook the crucial aspect of resource allocation, leading to suboptimal system performance. To address this gap, there is a pressing need to holistically explore both application placement and resource allocation in a unified manner. In this paper, we introduce the place and allocate problem in heterogeneous multi-server systems, a novel approach aiming at simultaneously optimizing the placement and allocation of applications to maximize the overall utility of the heterogeneous multi-server system. Our proposed methodology harnesses the interplay between application placement and resource allocation, showcasing substantial improvements in system utility. To model individual application performance concerning their allocated resources, we employ utility functions. For concave utility functions, we present an approximation algorithm that operates efficiently with a time complexity of <span><math><mrow><mi>O</mi><mrow><mo>(</mo><mi>m</mi><msup><mrow><mi>n</mi></mrow><mrow><mn>3</mn></mrow></msup><msup><mrow><mrow><mo>(</mo><mi>l</mi><mi>o</mi><mi>g</mi><mi>C</mi><mo>)</mo></mrow></mrow><mrow><mn>2</mn></mrow></msup><mo>)</mo></mrow></mrow></math></span>, where <span><math><mi>n</mi></math></span> represents the number of applications, <span><math><mi>m</mi></math></span> is the number of servers, and <span><math><mi>C</mi></math></span> denotes the maximum available resource capacity of each server. Furthermore, we extend our approach to accommodate more general scenarios that involve applications with nonconcave utility functions and using multiple types of resources. Our study includes comprehensive experimental evaluations conducted on applications with both synthetic and real-world utility functions. Results consistently showcase that our algorithms achieve over 96.9% of optimal performance on average. Additionally, comparative analysis against several practical heuristics reveal that our algorithms outperform these methods by up to 4.3 times in total utility.</p></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141978842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
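The abstract does not spell out the approximation algorithm, but the role of concavity can be illustrated with the classic greedy marginal-gain allocation, which is optimal for concave utilities over a single shared resource pool. This is a standard baseline, not the authors' method; the utility functions and capacity are invented.

```python
# Greedy marginal-gain allocation: optimal for separable concave utilities
# drawing on one shared capacity. A baseline, not the paper's algorithm.
import heapq
import math

# Toy concave utilities: u_i(r) for application i given r resource units.
utilities = [
    lambda r: 3.0 * math.log1p(r),
    lambda r: 2.0 * math.sqrt(r),
    lambda r: 1.5 * math.log1p(2 * r),
]

def greedy_allocate(utilities, capacity):
    """Hand out `capacity` integral units, one at a time, always to the
    application with the largest marginal utility gain."""
    alloc = [0] * len(utilities)
    # Max-heap via negated gains: (-marginal gain, application index).
    heap = [(-(u(1) - u(0)), i) for i, u in enumerate(utilities)]
    heapq.heapify(heap)
    for _ in range(capacity):
        _, i = heapq.heappop(heap)
        alloc[i] += 1
        u = utilities[i]
        heapq.heappush(heap, (-(u(alloc[i] + 1) - u(alloc[i])), i))
    return alloc

print(greedy_allocate(utilities, capacity=10))
```

Concavity guarantees diminishing marginal gains, which is what lets a greedy choice be globally optimal here; nonconcave utilities (the paper's extension) break this property.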
Impact of network topology on the performance of Decentralized Federated Learning
IF 4.4 · CAS Zone 2 · Computer Science
Computer Networks · Pub Date: 2024-08-05 · DOI: 10.1016/j.comnet.2024.110681
{"title":"Impact of network topology on the performance of Decentralized Federated Learning","authors":"","doi":"10.1016/j.comnet.2024.110681","DOIUrl":"10.1016/j.comnet.2024.110681","url":null,"abstract":"<div><p>Fully decentralized learning is gaining momentum for training AI models at the Internet’s edge, addressing infrastructure challenges and privacy concerns. In a decentralized machine learning system, data is distributed across multiple nodes, with each node training a local model based on its respective dataset. The local models are then shared and combined to form a global model capable of making accurate predictions on new data. Our exploration focuses on how different types of network structures influence the spreading of knowledge – the process by which nodes incorporate insights gained from learning patterns in data available on other nodes across the network. Specifically, this study investigates the intricate interplay between network structure and learning performance using three network topologies and six data distribution methods. These methods consider different vertex properties, including degree centrality, betweenness centrality, and clustering coefficient, along with whether nodes exhibit high or low values of these metrics. Our findings underscore the significance of global centrality metrics (degree, betweenness) in correlating with learning performance, while local clustering proves less predictive. We highlight the challenges in transferring knowledge from peripheral to central nodes, attributed to a dilution effect during model aggregation. Additionally, we observe that central nodes exert a pull effect, facilitating the spread of knowledge. In examining degree distribution, hubs in Barabási–Albert networks positively impact learning for central nodes but exacerbate dilution when knowledge originates from peripheral nodes. Finally, we demonstrate the formidable challenge of knowledge circulation outside of segregated communities, and discuss the impact of class cross-correlations.</p></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1389128624005139/pdfft?md5=c3f5bf876ac97593e2027a2893084b77&pid=1-s2.0-S1389128624005139-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142044382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
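A small sketch of the kind of setup this abstract describes: generate a Barabási–Albert topology with networkx, compute the vertex properties the study correlates with learning performance, and pick a data placement. The placement rule shown (rare class on the most peripheral nodes) is an illustrative stand-in, not one of the paper's six distribution methods.

```python
import networkx as nx

# Barabási-Albert topology: 100 nodes, each new node attaching to 3 others.
G = nx.barabasi_albert_graph(n=100, m=3, seed=42)

# Vertex properties the study correlates with learning performance.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
clustering = nx.clustering(G)

# Illustrative data-distribution rule (not one of the paper's six): place a
# "rare" class on the most peripheral nodes, so its knowledge must travel
# from the periphery toward the hubs during aggregation.
peripheral = sorted(G.nodes, key=lambda v: degree[v])[:10]
node_data = {v: ("rare_class" if v in peripheral else "common_classes")
             for v in G.nodes}

print("rare-class holders:", peripheral)
print("top hubs:", sorted(G.nodes, key=lambda v: degree[v], reverse=True)[:10])
```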
Advancing TSN flow scheduling: An efficient framework without flow isolation constraint
IF 4.4 · CAS Zone 2 · Computer Science
Computer Networks · Pub Date: 2024-08-03 · DOI: 10.1016/j.comnet.2024.110688
{"title":"Advancing TSN flow scheduling: An efficient framework without flow isolation constraint","authors":"","doi":"10.1016/j.comnet.2024.110688","DOIUrl":"10.1016/j.comnet.2024.110688","url":null,"abstract":"<div><p>In the domain of Time-Sensitive Networking (TSN), the quest for ultra-reliable low-latency communication is paramount. Current scheduling strategies, which hinge on strict isolation to ensure low latency and jitter, confront the challenges of high overhead in worst-case latency evaluation and consequent limitations in network flow capacity. This paper introduces an innovative framework that transcends traditional isolation constraints, thereby expanding the solution space and augmenting network schedulability. At the heart of this framework lies a novel latency jitter analysis method that assesses the viability of non-isolation scenarios with constant time complexity. This method underpins a heuristic scheduling algorithm that not only boasts the smallest time complexity among existing heuristics but also significantly increases the number of scheduled flows. Complementing this, we integrate a discrete time reference approach to hasten time-intensive scheduling operations, achieving an optimal balance between schedulability and runtime efficiency. The framework further incorporates a workload-shifting technique to enhance online scheduling responsiveness. It adeptly manages the variability in scheduling times caused by disharmonious flow periods, further bolstering the framework’s robustness. Experimental validations demonstrate that our framework can increase the scheduled flows up to 269%. It reduces scheduling runtime by up to 98.44% for medium-scale networks while maintaining a flat runtime growth curve, ensuring predictable performance in online scheduling scenarios.</p></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1389128624005206/pdfft?md5=8a864eff61b93d517e72680e57828270&pid=1-s2.0-S1389128624005206-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141936963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
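The framework itself cannot be reproduced from the abstract, but the underlying problem (assigning conflict-free transmission offsets to periodic flows) can be sketched with a toy first-fit scheduler over the hyperperiod of one shared link. The flow parameters and shortest-period-first ordering are illustrative assumptions.

```python
# Toy TSN-style offset assignment on one shared egress link: each periodic
# flow gets the earliest offset whose repetitions never collide. Illustrative
# only; the paper's non-isolation analysis and heuristics are far richer.
import math
from functools import reduce

# (name, period in slots, transmission length in slots)
flows = [("f1", 4, 1), ("f2", 8, 2), ("f3", 8, 1), ("f4", 16, 3)]

hyperperiod = reduce(math.lcm, (p for _, p, _ in flows))
busy = [False] * hyperperiod  # slot occupancy over one hyperperiod

def first_fit_offset(period, length):
    """Earliest offset whose every periodic repetition lands on free slots."""
    for offset in range(period - length + 1):
        slots = [base + offset + k
                 for base in range(0, hyperperiod, period)
                 for k in range(length)]
        if not any(busy[s] for s in slots):
            for s in slots:
                busy[s] = True
            return offset
    return None  # unschedulable under this ordering

# Shortest-period (tightest) flows first, a common heuristic ordering.
for name, period, length in sorted(flows, key=lambda f: f[1]):
    print(name, "-> offset", first_fit_offset(period, length))
```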
LEO-based network-centric localization in 6G: Challenges and future perspectives
IF 4.4 · CAS Zone 2 · Computer Science
Computer Networks · Pub Date: 2024-08-03 · DOI: 10.1016/j.comnet.2024.110689
{"title":"LEO-based network-centric localization in 6G: Challenges and future perspectives","authors":"","doi":"10.1016/j.comnet.2024.110689","DOIUrl":"10.1016/j.comnet.2024.110689","url":null,"abstract":"<div><p>The future releases of 3rd Generation Partnership Project (3GPP) specifications, that is, beyond release 18, will consider the possibility to localize the User Equipment (UE) at network-side, eventually using satellite constellations of the integrated Terrestrial-Non-Terrestrial networks (T-NTN). Satellite network-centric localization schemes can be categorized into single- and multi-satellite localization using spatio-temporal measurements or instantaneous spatial diversity, respectively Direct channel measurements such as Doppler, Received Signal Strength (RSS), Round Trip Time (RTT) and Angle-of-Arrival (AoA) or differential measurements such as Time-Difference-of-Arrival (TDoA) and Frequency-Difference-of-Arrival (FDoA) have been considered in the literature to aid the localization operation. This paper focuses on the applicability of an RTT approach, which has some advantages with respect to the other approaches in case of satellite network-centric localization in the integrated T-NTN. The paper shows some preliminary results of the proposed RTT approach. Finally, challenges and research trends of this novel research field have been highlighted.</p></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142021166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
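A hedged sketch of what RTT-based network-centric positioning reduces to: convert each round-trip time to a range and solve a nonlinear least-squares multilateration. The satellite positions are invented and the channel is ideal (no processing delay, clock bias, or noise), unlike the scenarios the paper analyzes.

```python
# RTT multilateration sketch: range_i = c * RTT_i / 2, then least squares.
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

# Invented LEO satellite positions (m) at the measurement instant.
sats = np.array([
    [ 7.0e6,  0.0,    1.0e6],
    [ 0.0,    7.0e6,  2.0e6],
    [-6.5e6,  2.0e6,  1.5e6],
    [ 1.0e6, -6.8e6,  0.5e6],
])
true_ue = np.array([3.9e6, 3.0e6, 4.0e6])  # ground-truth UE position

# Ideal RTTs generated from the geometry (no noise, no processing delay).
rtts = 2.0 * np.linalg.norm(sats - true_ue, axis=1) / C

def residuals(p):
    # Difference between modeled ranges and RTT-derived ranges.
    return np.linalg.norm(sats - p, axis=1) - C * rtts / 2.0

est = least_squares(residuals, x0=np.array([1e6, 1e6, 1e6])).x
print(np.round(est - true_ue, 6))  # ~zero position error in the ideal case
```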
Rethinking the mobile edge for vehicular services
IF 4.4 · CAS Zone 2 · Computer Science
Computer Networks · Pub Date: 2024-08-02 · DOI: 10.1016/j.comnet.2024.110687
{"title":"Rethinking the mobile edge for vehicular services","authors":"","doi":"10.1016/j.comnet.2024.110687","DOIUrl":"10.1016/j.comnet.2024.110687","url":null,"abstract":"<div><p>The growing connected car market requires mobile network operators (MNOs) to rethink their network architecture to deliver ultra-reliable low-latency communications. In response, Multi-Access Edge Computing (MEC) has emerged as a solution, enabling the deployment of computing resources at the network edge. For MNOs to tap into the potential benefits of MEC, they need to transform their networks accordingly. Consequently, the primary objective of this study is to design a realistic MEC architecture and corresponding optimal <em>deployment</em> strategy – deciding on the <em>placement</em> and <em>configuration</em> of computing resources – as opposed to prior studies focusing on MEC run-time management and orchestration (e.g., service placement, computation offloading, and user allocation). To cater to the heterogeneous demands of vehicular services, we propose a multi-tier MEC architecture aligned with 5G and Beyond-5G radio access network deployments. Therefore, we frame MEC deployment as an optimization problem within this architecture, assuming 3 MEC tiers. Our data-driven evaluation, grounded in realistic assumptions about network architecture, usage, latency, and cost models, relies on datasets from a major MNO in the UK. We show the benefits of adopting a 3-tier MEC architecture over single-tier (centralized or distributed) architectures for heterogeneous vehicular services, in terms of deployment cost, energy consumption, and robustness.</p></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S138912862400519X/pdfft?md5=0009a470794f51bd23f900931eb13c8f&pid=1-s2.0-S138912862400519X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141985452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
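To make the "deployment as optimization" framing concrete, here is a toy assignment of vehicular services to the cheapest of three MEC tiers that still meets each service's latency bound. All tier latencies, costs, and service demands are invented; the paper jointly optimizes placement and sizing under far richer cost, energy, and robustness models.

```python
# Toy 3-tier MEC deployment: pick the cheapest tier meeting each latency bound.
tiers = [
    # (name, round-trip latency in ms, cost per unit of capacity) - invented
    ("far-edge", 5, 9.0),    # co-located with radio sites
    ("mid-tier", 15, 4.0),   # aggregation points
    ("central", 40, 1.5),    # regional data center
]

services = [
    # (name, latency bound in ms, demanded capacity units) - invented
    ("cooperative-awareness", 10, 30),
    ("remote-driving", 8, 20),
    ("hd-map-updates", 50, 100),
    ("infotainment", 100, 200),
]

total_cost = 0.0
for name, bound, demand in services:
    feasible = [t for t in tiers if t[1] <= bound]
    tier = min(feasible, key=lambda t: t[2])  # cheapest feasible tier
    total_cost += tier[2] * demand
    print(f"{name:>22} -> {tier[0]} (cost {tier[2] * demand:.1f})")
print("total deployment cost:", total_cost)
```

Even this toy shows the multi-tier advantage the paper quantifies: latency-tolerant services sink to cheap central capacity instead of occupying expensive far-edge resources.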
Priv-Share: A privacy-preserving framework for differential and trustless delegation of cyber threat intelligence using blockchain
IF 4.4 · CAS Zone 2 · Computer Science
Computer Networks · Pub Date: 2024-08-02 · DOI: 10.1016/j.comnet.2024.110686
{"title":"Priv-Share: A privacy-preserving framework for differential and trustless delegation of cyber threat intelligence using blockchain","authors":"","doi":"10.1016/j.comnet.2024.110686","DOIUrl":"10.1016/j.comnet.2024.110686","url":null,"abstract":"<div><p>The emergence of the Internet of Things (IoT), Industry 5.0 applications and associated services have caused a powerful transition in the cyber threat landscape. As a result, organisations require new ways to proactively manage the risks associated with their infrastructure. In response, a significant amount of research has focused on developing efficient <em>Cyber Threat Intelligence</em> (CTI) sharing. However, in many cases, CTI contains sensitive information that has the potential to leak valuable information or cause reputational damage to the sharing organisation. While a number of existing CTI sharing approaches have utilised blockchain to facilitate privacy, it can be highlighted that a comprehensive approach that enables dynamic trust-based decision-making, facilitates decentralised trust evaluation and provides CTI producers with highly granular sharing of CTI is lacking. Subsequently, in this paper, we propose a blockchain-based CTI sharing framework, called <em>Priv-Share</em>, as a promising solution towards this challenge. In particular, we highlight that the integration of <em>differential sharing</em>, <em>trustless delegation</em>, <em>democratic group managers</em> and <em>incentives</em> as part of <em>Priv-Share</em> ensures that it can satisfy these criteria. The results of an analytical evaluation of the proposed framework using both queuing and game theory demonstrate its ability to provide scalable CTI sharing in a trustless manner. Moreover, a quantitative evaluation of an Ethereum proof-of-concept prototype demonstrates that applying the proposed framework within real-world contexts is feasible.</p></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1389128624005188/pdfft?md5=f66aca799d724e317c329989ebfd22dc&pid=1-s2.0-S1389128624005188-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141936965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
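As a toy illustration of the differential-sharing idea (and only that), the sketch below releases CTI fields according to a consumer's trust tier. The tiers, fields, and policy are invented; Priv-Share's trust evaluation, delegation, and on-chain machinery are not reproduced here.

```python
# Toy "differential sharing": reveal CTI fields by consumer trust tier.
# Tiers, fields, and policy are invented placeholders.
CTI_RECORD = {
    "indicator": "203.0.113.7",
    "malware_family": "AgentTesla",
    "victim_sector": "energy",      # sensitive: may identify the victim
    "internal_asset": "vpn-gw-03",  # sensitive: internal infrastructure
}

TIER_FIELDS = {
    "public": {"indicator"},
    "vetted": {"indicator", "malware_family"},
    "trusted-partner": {"indicator", "malware_family", "victim_sector"},
}

def share(record: dict, tier: str) -> dict:
    """Return the view of the record permitted for a given trust tier."""
    allowed = TIER_FIELDS.get(tier, set())
    return {k: v for k, v in record.items() if k in allowed}

print(share(CTI_RECORD, "public"))   # indicator only
print(share(CTI_RECORD, "vetted"))   # indicator + malware family
```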
MSDQ: Multi-Scheduling Dual-Queues coflow scheduling without prior knowledge
IF 4.4 · CAS Zone 2 · Computer Science
Computer Networks · Pub Date: 2024-08-02 · DOI: 10.1016/j.comnet.2024.110685
{"title":"MSDQ: Multi-Scheduling Dual-Queues coflow scheduling without prior knowledge","authors":"","doi":"10.1016/j.comnet.2024.110685","DOIUrl":"10.1016/j.comnet.2024.110685","url":null,"abstract":"<div><p>Coflow scheduling is crucial for enhancing application-level communication performance in data-parallel clusters. While schemes like Varys can potentially achieve optimal performance, their dependence on a prior information about coflows poses practical challenges. Existing non-clairvoyant solutions, such as Aalo, approximate the classical online Shortest-Job-First (SJF) scheduling but fail to identify bottleneck flows in coflows. Consequently, they often allocate excessive bandwidth to non-bottleneck flows, leading to bandwidth wastage and reduced overall performance. In this paper, we introduce MSDQ, a coflow scheduling mechanism that operates without prior knowledge, utilizing multi-scheduling dual-priority queues, and using width estimates. This method adjusts coflow queue priorities and scheduling sequences based on the coflow’s width and the volume of data transmitted. By reallocating unused network bandwidth at multiple points during the scheduling process, MSDQ maximizes the bandwidth usage and significantly reduces the average coflow completion time. Our evaluation, using a publicly available production cluster trace from Facebook, demonstrates that MSDQ reduces the average coflow completion time by <span><math><mrow><mn>1</mn><mo>.</mo><mn>42</mn><mo>×</mo></mrow></math></span> compared to Aalo.</p></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141953838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
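The demotion logic behind multi-level coflow queues can be sketched in a few lines. The thresholds and the per-flow normalization by coflow width below are illustrative guesses at the flavor of MSDQ's rule, not its actual mechanism, and the dual-queue bandwidth reclamation is omitted.

```python
# Sketch of multi-level coflow priority queues: a coflow starts in the highest
# priority queue and is demoted as its sent bytes cross successive thresholds.
# Thresholds and the width normalization are illustrative, not MSDQ's rules.
QUEUE_THRESHOLDS = [10e6, 100e6, 1e9]  # bytes sent before each demotion

def priority_queue(bytes_sent: float, width: int) -> int:
    """Return queue index (0 = highest priority). Charging volume per flow
    keeps a wide-but-thin coflow from being demoted as if it were heavy."""
    normalized = bytes_sent / max(width, 1)   # per-flow volume proxy
    for q, threshold in enumerate(QUEUE_THRESHOLDS):
        if normalized < threshold:
            return q
    return len(QUEUE_THRESHOLDS)              # lowest priority

# 50 MB sent over 1 flow vs. the same 50 MB spread over 50 flows:
print(priority_queue(50e6, width=1))   # -> 1 (demoted once)
print(priority_queue(50e6, width=50))  # -> 0 (stays high priority)
```

The width term is the intuition behind "width estimates" in the abstract: without it, a non-clairvoyant scheduler conflates breadth with heaviness, exactly the failure mode MSDQ targets.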
Disruptive 6G architecture: Software-centric, AI-driven, and digital market-based mobile networks
IF 4.4 · CAS Zone 2 · Computer Science
Computer Networks · Pub Date: 2024-07-31 · DOI: 10.1016/j.comnet.2024.110682
{"title":"Disruptive 6G architecture: Software-centric, AI-driven, and digital market-based mobile networks","authors":"","doi":"10.1016/j.comnet.2024.110682","DOIUrl":"10.1016/j.comnet.2024.110682","url":null,"abstract":"<div><p>Mobile communications have followed a progression model detailed by the Gartner hype cycle, from a proof-of-concept to widespread productivity. As fifth-generation (5G) mobile networks are being deployed, their potential and constraints are becoming more evident. Although 5G boasts a flexible architecture, enhanced bandwidth, and data throughput, it still grapples with infrastructure challenges, security vulnerabilities, coverage issues, and limitations in fully enabling the Internet of Everything (IoE). As the world experiences exponential growth in Internet users and digitized devices, relying solely on evolutionary technologies seems inadequate. Recognizing this, global entities such as the 3rd Generation Partnership Project (3GPP) are laying the groundwork for 5G Advanced, a precursor to 6G. This article argues against a mere evolutionary leap from 5G to 6G. We propose a radical shift towards a disruptive 6G architecture (D6G) that harnesses the power of smart contracts, decentralized Artificial Intelligence (AI), and digital twins. This novel design offers a software-centric, AI-driven, and digital market-based redefinition of mobile technologies. As a result of an integrated collaboration among researchers from the Brazil 6G Project, this work identifies and synthesizes fifty-one key emerging enablers for 6G, devising a unique and holistic integration framework. Emphasizing flexibility, D6G promotes a digital market environment, allowing seamless resource sharing and solving several of 5G’s current challenges. This article comprehensively explores these enablers, presenting a groundbreaking approach to 6G’s design and implementation and setting the foundation for a more adaptable, autonomous, digitally monitored, and AI-driven mobile communication landscape. Finally, we developed a queuing theory model to evaluate the D6G architecture. Results show that the worst-case delay for deploying a smart contract in a 6G domain was 23 s. Furthermore, under high transaction rates of ten transactions per minute, the delay for contracting a 6G slice was estimated at 53.7 s, demonstrating the architecture’s capability to handle high transaction volumes efficiently.</p></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141936966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
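The reported figures invite a quick sanity check: if the queuing model behaves like M/M/1, the 53.7 s slice-contracting delay at ten transactions per minute corresponds to a mean service time of roughly 5.4 s per transaction. The M/M/1 reduction and that service time are assumptions back-solved from the abstract's numbers, not the paper's model.

```python
# M/M/1 sanity check for the reported slice-contracting delay. Assumes the
# paper's queuing model reduces to M/M/1; the 5.4 s mean service time is
# back-solved from the reported numbers, not taken from the paper.
def mm1_sojourn(arrival_rate: float, service_rate: float) -> float:
    """Mean time in system W = 1 / (mu - lambda); requires mu > lambda."""
    assert service_rate > arrival_rate, "queue is unstable"
    return 1.0 / (service_rate - arrival_rate)

lam = 10 / 60.0   # ten transactions per minute, in tx/s
mu = 1 / 5.4      # assumed mean service time of 5.4 s per transaction
print(f"W = {mm1_sojourn(lam, mu):.1f} s")  # ~54 s, near the reported 53.7 s
```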
Spatio-temporal graph learning: Traffic flow prediction of mobile edge computing in 5G/6G vehicular networks
IF 4.4 · CAS Zone 2 · Computer Science
Computer Networks · Pub Date: 2024-07-31 · DOI: 10.1016/j.comnet.2024.110676
{"title":"Spatio-temporal graph learning: Traffic flow prediction of mobile edge computing in 5G/6G vehicular networks","authors":"","doi":"10.1016/j.comnet.2024.110676","DOIUrl":"10.1016/j.comnet.2024.110676","url":null,"abstract":"<div><p>Mobile Edge Computing (MEC) is a key technology that emerged to address the increasing computational demands and communication requirements of vehicular networks. It is a form of edge computing that brings cloud computing capabilities closer to end-users, specifically within the context of vehicular networks, which are part of the broader Internet of Vehicles (IoV) ecosystem. However, the dynamic nature of traffic flows in MEC in 5G/6G vehicular networks poses challenges for accurate prediction and resource allocation when aiming to provide edge service for mobile vehicles. In this paper, we present a novel approach to predict the traffic flow of MEC in 5G/6G vehicular networks using graph-based learning. In our framework, MEC servers in vehicular networks are construed as nodes to construct a dynamic similarity graph and a dynamic transition graph over a duration of multiple days. We utilize Graph Attention Networks (GAT) to learn and fuse the node embeddings of these dynamic graphs. A transformer model is subsequently employed to predict the vehicle frequency accessing the edge computing services for the next day. Our experimental results have shown that the model achieves high accuracy in predicting edge service access volumes with low error metrics.</p></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141953674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
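A compressed sketch of the described pipeline, assuming PyTorch and torch_geometric: per-day GAT embeddings of the similarity and transition graphs are fused, and a transformer over the day sequence produces a next-day forecast per node. All dimensions, the sum fusion, and the single-layer stacks are placeholders rather than the paper's configuration.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv

# Placeholder sizes: 20 MEC servers, a 7-day window, 8 input features.
N_NODES, N_DAYS, F_IN, D = 20, 7, 8, 32

gat_sim = GATConv(F_IN, D, heads=2, concat=False)    # similarity graph view
gat_trans = GATConv(F_IN, D, heads=2, concat=False)  # transition graph view
temporal = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True),
    num_layers=1,
)
head = nn.Linear(D, 1)  # next-day access frequency per node

# Toy inputs: per-day node features and random edges for both graph views.
x = torch.randn(N_DAYS, N_NODES, F_IN)
edges_sim = [torch.randint(0, N_NODES, (2, 60)) for _ in range(N_DAYS)]
edges_trans = [torch.randint(0, N_NODES, (2, 60)) for _ in range(N_DAYS)]

# Fuse the two graph views per day (a simple sum here; the paper learns one).
daily = torch.stack([
    gat_sim(x[d], edges_sim[d]) + gat_trans(x[d], edges_trans[d])
    for d in range(N_DAYS)
], dim=1)                              # shape: [N_NODES, N_DAYS, D]

pred = head(temporal(daily)[:, -1])    # last day's representation -> forecast
print(pred.shape)                      # torch.Size([20, 1])
```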