{"title":"Non-cooperative multi-agent deep reinforcement learning for channel resource allocation in vehicular networks","authors":"Fuxin Zhang, Sihan Yao, Wei Liu, Liang Qi","doi":"10.1016/j.comnet.2024.111006","DOIUrl":null,"url":null,"abstract":"<div><div>Vehicle-to-vehicle (V2V) communication is a critical technology in supporting vehicle safety applications in vehicular networks. The high mobility in vehicular networks makes the channel state change rapidly, which poses significant challenges to reliable V2V communications. The traditional resource allocation methods neglect the fair requirement and cannot guarantee reliable transmission for each V2V link. In this paper, we first develop a network pay-off function to characterize the measure of satisfaction that a V2V link obtains from the network. Based on the pay-off function, the resource allocation problem among V2V links is formulated as a non-cooperation game problem. A non-cooperative multi-agent reinforcement learning method for resource sharing is then constructed. In this method, each V2V link is treated as an agent. Each agent interacts with unknown environments and neighboring agents to learn the best spectrum allocation and power control policy to reach a Nash equilibrium point for each V2V link, where they obtain fair transmissions and achieve reliable communications under different network scenarios. Experimental results indicate that our proposed method outperforms other benchmark schemes by more than 10% in packet delivery probability while achieving fair transmissions for V2V links.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"257 ","pages":"Article 111006"},"PeriodicalIF":4.4000,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389128624008387","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0
Abstract
Vehicle-to-vehicle (V2V) communication is a critical technology for supporting vehicle safety applications in vehicular networks. The high mobility in vehicular networks makes the channel state change rapidly, which poses significant challenges to reliable V2V communications. Traditional resource allocation methods neglect the fairness requirement and cannot guarantee reliable transmission for each V2V link. In this paper, we first develop a network pay-off function to characterize the satisfaction that a V2V link obtains from the network. Based on this pay-off function, the resource allocation problem among V2V links is formulated as a non-cooperative game. A non-cooperative multi-agent reinforcement learning method for resource sharing is then constructed. In this method, each V2V link is treated as an agent. Each agent interacts with the unknown environment and with neighboring agents to learn the best spectrum allocation and power control policy, reaching a Nash equilibrium point at which the V2V links obtain fair transmissions and achieve reliable communications under different network scenarios. Experimental results indicate that our proposed method outperforms other benchmark schemes by more than 10% in packet delivery probability while achieving fair transmissions for V2V links.
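The abstract outlines the core idea: each V2V link acts as an independent agent that learns to select a sub-channel and a transmit power level. The Python sketch below is only a toy illustration of that agent structure under assumptions made here, not the authors' implementation: the network architecture, state features, action-space sizes, reward, and the helper names `QNet`, `V2VAgent`, and `toy_step` are all hypothetical placeholders.

```python
# Toy sketch (assumptions, not the paper's method): each V2V link is an
# independent DQN agent choosing a discrete (sub-channel, power level) action.
import random
import torch
import torch.nn as nn

N_CHANNELS, N_POWER_LEVELS = 4, 3          # assumed discrete action space
N_ACTIONS = N_CHANNELS * N_POWER_LEVELS
STATE_DIM = N_CHANNELS + 2                 # assumed features: per-channel interference + own gain + load

class QNet(nn.Module):
    """Small Q-network mapping a local observation to action values."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS))
    def forward(self, x):
        return self.net(x)

class V2VAgent:
    """One agent per V2V link; each learns independently (non-cooperative)."""
    def __init__(self, lr=1e-3, gamma=0.95, eps=0.1):
        self.q = QNet()
        self.opt = torch.optim.Adam(self.q.parameters(), lr=lr)
        self.gamma, self.eps = gamma, eps

    def act(self, state):
        if random.random() < self.eps:                 # epsilon-greedy exploration
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            return int(self.q(torch.tensor(state)).argmax())

    def update(self, s, a, r, s_next):
        q_sa = self.q(torch.tensor(s))[a]              # one-step TD update
        with torch.no_grad():
            target = r + self.gamma * self.q(torch.tensor(s_next)).max()
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad(); loss.backward(); self.opt.step()

def toy_step(actions):
    """Placeholder environment: random next observations and a reward that
    penalizes agents sharing a sub-channel (a crude stand-in for interference)."""
    channels = [a // N_POWER_LEVELS for a in actions]
    rewards = [1.0 - 0.5 * (channels.count(c) - 1) for c in channels]
    states = [torch.rand(STATE_DIM).tolist() for _ in actions]
    return states, rewards

if __name__ == "__main__":
    agents = [V2VAgent() for _ in range(4)]            # four V2V links as agents
    states = [torch.rand(STATE_DIM).tolist() for _ in agents]
    for _ in range(200):                               # toy training loop
        actions = [ag.act(s) for ag, s in zip(agents, states)]
        next_states, rewards = toy_step(actions)
        for ag, s, a, r, s2 in zip(agents, states, actions, rewards, next_states):
            ag.update(s, a, r, s2)
        states = next_states
```

In the paper's method the reward is derived from the proposed network pay-off function and agents also exchange information with neighboring agents; the interference penalty above is only a simplified stand-in to make the sketch self-contained.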
About the journal:
Computer Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in the computer communications networking area. The audience includes researchers, managers and operators of networks as well as designers and implementors. The Editorial Board will consider any material for publication that is of interest to those groups.