Authors: Junfeng Xie; Qingmin Jia; Youxing Chen
DOI: 10.1109/ACCESS.2024.3469956
Journal: IEEE Access (JCR Q2, Computer Science, Information Systems; Region 3, Computer Science)
Impact factor: 3.4
Publication date: 2024-09-30 (Journal Article)
Article page: https://ieeexplore.ieee.org/document/10699330/
PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10699330
Energy-Efficient Intelligence Sharing in Intelligence Networking-Empowered Edge Computing: A Deep Reinforcement Learning Approach
Advanced artificial intelligence (AI) and multi-access edge computing (MEC) technologies facilitate the development of edge intelligence, enabling intelligence learned in the remote cloud to be deployed at the network edge. To achieve automatic decision-making, the training efficiency and accuracy of AI models are crucial for edge intelligence. However, the volume of data collected by each network edge node is limited, which may cause AI models to over-fit. To improve the training efficiency and accuracy of AI models for edge intelligence, intelligence networking-empowered edge computing (INEEC) is a promising solution: it enables each network edge node to improve its AI models quickly and economically with the help of other edge nodes, which share the intelligence they have learned. Sharing intelligence efficiently among network edge nodes is therefore essential for INEEC. In this paper, we study an intelligence sharing scheme that aims to maximize system energy efficiency, while respecting a latency tolerance, by jointly optimizing the intelligence requesting strategy, transmission power control, and computation resource allocation. System energy efficiency is defined as the ratio of model performance to energy consumption. Taking into account the dynamic nature of edge network conditions, the intelligence sharing problem is modeled as a Markov decision process (MDP). A twin delayed deep deterministic policy gradient (TD3)-based algorithm is then designed to make the optimal decisions automatically. Finally, extensive simulation experiments show that: 1) compared with DDPG and DQN, the proposed algorithm has better convergence performance; 2) jointly optimizing the intelligence requesting strategy, transmission power control, and computation resource allocation improves intelligence sharing efficiency; and 3) under different parameter settings, the proposed algorithm outperforms the benchmark algorithms.
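The abstract's objective (energy efficiency as the ratio of model-performance gain to energy consumption, subject to a latency tolerance) and the "twin" critic idea behind TD3 can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the action fields, the linear latency penalty, and all numeric values are assumptions for illustration only.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    """One joint action in the MDP (fields assumed for illustration)."""
    request_from: int   # which edge node to request intelligence from
    tx_power: float     # transmission power (continuous action)
    cpu_share: float    # fraction of local compute allocated


def energy_efficiency_reward(perf_gain: float,
                             tx_energy: float,
                             compute_energy: float,
                             latency: float,
                             latency_budget: float,
                             penalty: float = 1.0) -> float:
    """Reward = model-performance gain per unit of energy, with an
    assumed linear penalty when the latency tolerance is violated."""
    efficiency = perf_gain / (tx_energy + compute_energy)
    if latency > latency_budget:
        efficiency -= penalty * (latency - latency_budget)
    return efficiency


def td3_target(reward: float, gamma: float,
               q1_next: float, q2_next: float) -> float:
    """Clipped double-Q target: TD3 takes the minimum of its twin
    critics' estimates to reduce overestimation bias."""
    return reward + gamma * min(q1_next, q2_next)
```

For example, a sharing decision that gains 0.8 performance units for 2 J of total energy within the latency budget yields a reward of 0.4; the same gain with the budget exceeded is penalized proportionally to the overshoot.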
IEEE Access | COMPUTER SCIENCE, INFORMATION SYSTEMS | ENGINEERING, ELECTRICAL & ELECTRONIC
CiteScore
9.80
Self-citation rate
7.70%
Articles published
6673
Review turnaround
6 weeks
About the journal:
IEEE Access® is a multidisciplinary, open access (OA), applications-oriented, all-electronic archival journal that continuously presents the results of original research or development across all of IEEE's fields of interest.
IEEE Access will publish articles that are of high interest to readers, original, technically correct, and clearly presented. Supported by author publication charges (APC), its hallmarks are a rapid peer review and publication process with open access to all readers. Unlike IEEE's traditional Transactions or Journals, reviews are "binary": reviewers either Accept or Reject an article in the form it is submitted, in order to achieve rapid turnaround. Especially encouraged are submissions on:
Multidisciplinary topics, or applications-oriented articles and negative results that do not fit within the scope of IEEE's traditional journals.
Practical articles discussing new experiments or measurement techniques, and interesting solutions to engineering problems.
Development of new or improved fabrication or manufacturing techniques.
Reviews or survey articles of new or evolving fields oriented to assist others in understanding the new area.