Strategic maneuver and disruption with reinforcement learning approaches for multi-agent coordination

Derrik E. Asher, Anjon Basak, Rolando Fernandez, P. Sharma, Erin G. Zaroukian, Christopher D. Hsu, M. Dorothy, Thomas Mahre, Gerardo Galindo, Luke Frerichs, J. Rogers, J. Fossaceca
DOI: 10.1177/15485129221104096 (https://doi.org/10.1177/15485129221104096)
Journal: The Journal of Defense Modeling and Simulation
Published: 2022-03-17 (Journal Article)
Citation count: 1

Abstract

Reinforcement learning (RL) approaches can illuminate emergent behaviors that facilitate coordination across teams of agents as part of a multi-agent system (MAS), which can provide windows of opportunity in various military tasks. Technologically advancing adversaries pose substantial risks to a friendly nation’s interests and resources. Superior resources alone are not enough to defeat adversaries in modern complex environments because adversaries create standoff in multiple domains against predictable military doctrine-based maneuvers. Therefore, as part of a defense strategy, friendly forces must use strategic maneuvers and disruption to gain superiority in complex multi-faceted domains, such as multi-domain operations (MDOs). One promising avenue for implementing strategic maneuver and disruption to gain superiority over adversaries is through coordination of MAS in future military operations. In this paper, we present overviews of prominent works in the RL domain with their strengths and weaknesses for overcoming the challenges associated with performing autonomous strategic maneuver and disruption in military contexts.
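To make the abstract's central claim concrete, the following is a minimal toy sketch (not from the paper): two independent Q-learning agents repeatedly play a pure coordination game in which both receive reward only when their actions match. The agent count, action space, and hyperparameter values are illustrative assumptions; over training, the agents settle on a shared convention, a tiny instance of the emergent coordination the abstract describes.

```python
# Toy sketch: independent Q-learning in a two-agent coordination game.
# Both agents get reward 1 only when they pick the same action, so the
# greedy policies converge on a common convention (emergent coordination).
# All hyperparameters below are assumed illustrative values.
import random

random.seed(0)

N_ACTIONS = 2
ALPHA = 0.1    # learning rate (assumed)
EPSILON = 0.2  # exploration rate (assumed)

# One Q-table per agent; the repeated game is stateless, so each table is
# just one row of action values.
q = [[0.0] * N_ACTIONS for _ in range(2)]

def choose(agent):
    """Epsilon-greedy action selection for one agent."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    row = q[agent]
    return max(range(N_ACTIONS), key=lambda a: row[a])

for episode in range(5000):
    a0, a1 = choose(0), choose(1)
    reward = 1.0 if a0 == a1 else 0.0  # joint reward for matching actions
    # Each agent updates only its own value estimate (independent learners).
    q[0][a0] += ALPHA * (reward - q[0][a0])
    q[1][a1] += ALPHA * (reward - q[1][a1])

# After training, both agents' greedy actions coincide.
greedy = [max(range(N_ACTIONS), key=lambda a: q[i][a]) for i in range(2)]
print(greedy)
```

This is the simplest multi-agent RL setting; the surveyed approaches in the paper address far harder versions of the same problem (partial observability, adversarial opponents, non-stationarity from co-learning teammates).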