Emergency Scheduling of Aerial Vehicles via Graph Neural Neighborhood Search

Tong Guo;Yi Mei;Wenbo Du;Yisheng Lv;Yumeng Li;Tao Song
{"title":"基于图神经邻域搜索的飞行器应急调度","authors":"Tong Guo;Yi Mei;Wenbo Du;Yisheng Lv;Yumeng Li;Tao Song","doi":"10.1109/TAI.2025.3528381","DOIUrl":null,"url":null,"abstract":"The thriving advances in autonomous vehicles and aviation have enabled the efficient implementation of aerial last-mile delivery services to meet the pressing demand for urgent relief supply distribution. Variable neighborhood search (VNS) is a promising technique for aerial emergency scheduling. However, the existing VNS methods usually exhaustively explore all considered neighborhoods with a prefixed order, leading to an inefficient search process and slow convergence speed. To address this issue, this article proposes a novel <bold>g</b>raph n<bold>e</b>ural <bold>n</b>eighborhood <bold>s</b>earch (GENIS) algorithm, which includes an online reinforcement learning (RL) agent that guides the search process by selecting the most appropriate low-level local search operators based on the search state. We develop a dual-graph neural representation learning method to extract comprehensive and informative feature representations from the search state. Besides, we propose a reward-shaping policy learning method to address the decaying reward issue along the search process. Extensive experiments conducted across various benchmark instances demonstrate that the proposed algorithm significantly outperforms the state-of-the-art approaches. Further investigations validate the effectiveness of the newly designed knowledge guidance scheme and the learned feature representations.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 7","pages":"1808-1822"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Emergency Scheduling of Aerial Vehicles via Graph Neural Neighborhood Search\",\"authors\":\"Tong Guo;Yi Mei;Wenbo Du;Yisheng Lv;Yumeng Li;Tao Song\",\"doi\":\"10.1109/TAI.2025.3528381\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The thriving advances in autonomous vehicles and aviation have enabled the efficient implementation of aerial last-mile delivery services to meet the pressing demand for urgent relief supply distribution. Variable neighborhood search (VNS) is a promising technique for aerial emergency scheduling. However, the existing VNS methods usually exhaustively explore all considered neighborhoods with a prefixed order, leading to an inefficient search process and slow convergence speed. To address this issue, this article proposes a novel <bold>g</b>raph n<bold>e</b>ural <bold>n</b>eighborhood <bold>s</b>earch (GENIS) algorithm, which includes an online reinforcement learning (RL) agent that guides the search process by selecting the most appropriate low-level local search operators based on the search state. We develop a dual-graph neural representation learning method to extract comprehensive and informative feature representations from the search state. Besides, we propose a reward-shaping policy learning method to address the decaying reward issue along the search process. Extensive experiments conducted across various benchmark instances demonstrate that the proposed algorithm significantly outperforms the state-of-the-art approaches. 
Further investigations validate the effectiveness of the newly designed knowledge guidance scheme and the learned feature representations.\",\"PeriodicalId\":73305,\"journal\":{\"name\":\"IEEE transactions on artificial intelligence\",\"volume\":\"6 7\",\"pages\":\"1808-1822\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-01-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on artificial intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10838579/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10838579/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

The thriving advances in autonomous vehicles and aviation have enabled the efficient implementation of aerial last-mile delivery services to meet the pressing demand for urgent relief supply distribution. Variable neighborhood search (VNS) is a promising technique for aerial emergency scheduling. However, the existing VNS methods usually exhaustively explore all considered neighborhoods with a prefixed order, leading to an inefficient search process and slow convergence speed. To address this issue, this article proposes a novel graph neural neighborhood search (GENIS) algorithm, which includes an online reinforcement learning (RL) agent that guides the search process by selecting the most appropriate low-level local search operators based on the search state. We develop a dual-graph neural representation learning method to extract comprehensive and informative feature representations from the search state. Besides, we propose a reward-shaping policy learning method to address the decaying reward issue along the search process. Extensive experiments conducted across various benchmark instances demonstrate that the proposed algorithm significantly outperforms the state-of-the-art approaches. Further investigations validate the effectiveness of the newly designed knowledge guidance scheme and the learned feature representations.
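As a rough illustration of the mechanism the abstract describes, the Python sketch below shows a variable-neighborhood-style search loop in which an online RL agent, rather than a fixed neighborhood order, decides which local-search operator to apply next based on the current search state, and learns from a reward shaped to stay informative as improvements shrink. This is not the paper's GENIS implementation: the toy weighted-completion-time objective, the three operators, the tabular stagnation-based state, and the hyperparameters are all assumptions made here for illustration, and GENIS instead extracts its search-state features with the dual-graph neural representation learning described above.

```python
# Minimal sketch (not the authors' implementation) of RL-guided operator
# selection for a local-search scheduler. All problem data, operators, and
# hyperparameters below are illustrative assumptions.
import random

random.seed(0)

# Toy scheduling instance: order n tasks to minimize total weighted completion time.
WEIGHTS = [random.uniform(1, 5) for _ in range(30)]
TIMES = [random.uniform(1, 10) for _ in range(30)]

def cost(perm):
    """Total weighted completion time of a task permutation."""
    t, total = 0.0, 0.0
    for i in perm:
        t += TIMES[i]
        total += WEIGHTS[i] * t
    return total

# Low-level local-search operators (the "neighborhoods").
def op_swap(perm):
    p = perm[:]
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

def op_insert(perm):
    p = perm[:]
    i, j = random.sample(range(len(p)), 2)
    p.insert(j, p.pop(i))
    return p

def op_reverse(perm):
    p = perm[:]
    i, j = sorted(random.sample(range(len(p)), 2))
    p[i:j + 1] = reversed(p[i:j + 1])
    return p

OPERATORS = [op_swap, op_insert, op_reverse]

# Online RL agent: tabular Q-learning over a coarse search state
# (how long the search has stagnated), with epsilon-greedy operator choice.
N_STATES, EPS, ALPHA, GAMMA = 4, 0.2, 0.3, 0.9
Q = [[0.0] * len(OPERATORS) for _ in range(N_STATES)]

def state_of(stagnation):
    return min(stagnation // 25, N_STATES - 1)

def choose(state):
    if random.random() < EPS:
        return random.randrange(len(OPERATORS))
    return max(range(len(OPERATORS)), key=lambda a: Q[state][a])

# Search loop with a shaped reward: improvement is measured relative to the
# incumbent cost, so the learning signal does not vanish as absolute gains shrink.
best = list(range(len(TIMES)))
random.shuffle(best)
best_cost = cost(best)
stagnation = 0
for step in range(5000):
    s = state_of(stagnation)
    a = choose(s)
    cand = OPERATORS[a](best)
    cand_cost = cost(cand)
    reward = max(0.0, (best_cost - cand_cost) / best_cost)  # shaped reward
    if cand_cost < best_cost:
        best, best_cost, stagnation = cand, cand_cost, 0
    else:
        stagnation += 1
    s2 = state_of(stagnation)
    Q[s][a] += ALPHA * (reward + GAMMA * max(Q[s2]) - Q[s][a])

print(f"best weighted completion time: {best_cost:.1f}")
```

Dividing the improvement by the incumbent cost keeps late-stage gains, which are small in absolute terms, visible to the agent; this mirrors the decaying-reward issue the abstract raises, although the paper's reward-shaping policy learning method is its own design rather than this normalization.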