Multi-agent deep reinforcement learning based multiple access for underwater cognitive acoustic sensor networks

Impact Factor 4.0 | JCR Q1 (Computer Science, Hardware & Architecture) | CAS Tier 3 (Computer Science)
Yuzhi Zhang, Xiang Han, Ran Bai, Menglei Jia
{"title":"Multi-agent deep reinforcement learning based multiple access for underwater cognitive acoustic sensor networks","authors":"Yuzhi Zhang,&nbsp;Xiang Han,&nbsp;Ran Bai,&nbsp;Menglei Jia","doi":"10.1016/j.compeleceng.2024.109819","DOIUrl":null,"url":null,"abstract":"<div><div>Considering the challenges posed by the significant propagation delays inherent in underwater cognitive acoustic sensor networks, this paper explores the application of multi-agent deep reinforcement learning for the design of multiple access protocols. We deal with the problem of sharing channels and time slots among multiple sensor nodes that adopt different time-slotted MAC protocols. The multiple intelligent nodes can independently learn the strategies for accessing available idle time slots through the proposed multi-agent deep reinforcement learning (DRL) based multiple access control (MDRL-MAC) protocol. Considering the long propagation delay associated with underwater acoustic channels, we reformulate proper state, action, and reward within the DRL framework to address the multiple access challenges and optimize network throughput. To mitigate the decision deviation stemming from partial observability, the gated recurrent unit (GRU) is integrated into DRL to enhance the deep neural network’s performance. Additionally, to ensure both the maximization of network throughput and the maintenance of fairness among multiple agents, an inspiration mechanism (IM) is proposed to inspire the lazy agent to take more actions to improve its contribution to achieve multi-agent fairness. The simulation results show that the proposed protocol facilitates the convergence of network throughput to optimal levels across various system configurations and environmental conditions.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"120 ","pages":"Article 109819"},"PeriodicalIF":4.0000,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers & Electrical Engineering","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0045790624007468","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

Considering the challenges posed by the significant propagation delays inherent in underwater cognitive acoustic sensor networks, this paper explores the application of multi-agent deep reinforcement learning to the design of multiple access protocols. We address the problem of sharing channels and time slots among multiple sensor nodes that adopt different time-slotted MAC protocols. With the proposed multi-agent deep reinforcement learning (DRL) based multiple access control (MDRL-MAC) protocol, multiple intelligent nodes can independently learn strategies for accessing the available idle time slots. Considering the long propagation delay of underwater acoustic channels, we formulate appropriate state, action, and reward definitions within the DRL framework to address the multiple access challenges and optimize network throughput. To mitigate the decision deviation caused by partial observability, a gated recurrent unit (GRU) is integrated into the DRL agent to improve the deep neural network's performance. Additionally, to maximize network throughput while maintaining fairness among multiple agents, an inspiration mechanism (IM) is proposed that encourages a lazy agent to take more actions and increase its contribution, thereby achieving multi-agent fairness. Simulation results show that the proposed protocol allows network throughput to converge to optimal levels across various system configurations and environmental conditions.
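The abstract does not give implementation details, so the following is only a minimal sketch of how a GRU-augmented Q-network for slot-access decisions might look. It assumes a PyTorch implementation, a small discrete action space (stay silent or transmit in one of the idle slots/channels), and an observation window built from delayed ACK feedback; the class name, layer sizes, and observation layout are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a GRU-based Q-network for the slot-access decision,
# loosely following the MDRL-MAC description (observation window -> GRU -> Q-values).
# Names, sizes, and the observation layout are assumptions, not from the paper.
import torch
import torch.nn as nn

class GRUQNetwork(nn.Module):
    def __init__(self, obs_dim, num_actions, hidden_dim=64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)
        # The GRU summarizes past observations/ACKs, which helps when long
        # propagation delays make the environment only partially observable.
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.q_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, obs_seq, h0=None):
        # obs_seq: (batch, seq_len, obs_dim) -- a window of recent observations
        x = torch.relu(self.encoder(obs_seq))
        out, h_n = self.gru(x, h0)
        q_values = self.q_head(out[:, -1, :])  # Q-value per action at the latest step
        return q_values, h_n

# Example: one agent choosing among "wait" + 3 idle slots/channels
net = GRUQNetwork(obs_dim=8, num_actions=4)
obs = torch.zeros(1, 10, 8)          # batch of 1, last 10 time steps
q, h = net(obs)
action = int(q.argmax(dim=-1))       # greedy action; epsilon-greedy during training
```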
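The inspiration mechanism is described only at a high level, so the sketch below shows one plausible way such fairness-oriented reward shaping could be expressed: an agent whose share of successful transmissions falls below the fair share receives an extra bonus when it does contribute. The bonus weight, the share-based threshold, and the use of Jain's fairness index are assumptions made for illustration, not the paper's actual formulation.

```python
# Hypothetical reward shaping for the "inspiration mechanism" (IM):
# agents whose contribution lags behind the fair share get an extra incentive
# to transmit in idle slots. Weights and thresholds are illustrative only.
def shaped_reward(base_reward, agent_successes, agent_id, num_agents, bonus=0.5):
    """base_reward: +1 for a successful transmission, 0 otherwise (assumed).
    agent_successes: per-agent success counts observed so far."""
    total = sum(agent_successes) or 1
    my_share = agent_successes[agent_id] / total
    fair_share = 1.0 / num_agents
    if base_reward > 0 and my_share < fair_share:
        # A "lazy" agent succeeded: amplify the reward to inspire more activity.
        return base_reward + bonus * (fair_share - my_share) / fair_share
    return base_reward

def jains_index(successes):
    """Jain's fairness index over per-agent throughputs (1.0 = perfectly fair)."""
    n = len(successes)
    s = sum(successes)
    return (s * s) / (n * sum(x * x for x in successes)) if s else 1.0

# Example: agent 2 has contributed little, so its successful transmission
# is rewarded more strongly than the nominal +1.
counts = [10, 9, 1, 8]
print(shaped_reward(1.0, counts, agent_id=2, num_agents=4))   # ~1.43
print(jains_index(counts))                                    # ~0.80 (unfair)
```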
Source journal: Computers & Electrical Engineering (Engineering & Technology - Electrical & Electronic Engineering)
CiteScore: 9.20
Self-citation rate: 7.00%
Articles published: 661
Review time: 47 days
Journal description: The impact of computers has nowhere been more revolutionary than in electrical engineering. The design, analysis, and operation of electrical and electronic systems are now dominated by computers, a transformation that has been motivated by the natural ease of interface between computers and electrical systems, and the promise of spectacular improvements in speed and efficiency. Published since 1973, Computers & Electrical Engineering provides rapid publication of topical research into the integration of computer technology and computational techniques with electrical and electronic systems. The journal publishes papers featuring novel implementations of computers and computational techniques in areas like signal and image processing, high-performance computing, parallel processing, and communications. Special attention will be paid to papers describing innovative architectures, algorithms, and software tools.