An Adaptive Markov Game Model for Threat Intent Inference

Dan Shen, Genshe Chen, Jose B Cruz, C. Kwan, M. Kruger
{"title":"An Adaptive Markov Game Model for Threat Intent Inference","authors":"Dan Shen, Genshe Chen, Jose B Cruz, C. Kwan, M. Kruger","doi":"10.1109/AERO.2007.352800","DOIUrl":null,"url":null,"abstract":"In an adversarial military environment, it is important to efficiently and promptly predict the enemy's tactical intent from lower level spatial and temporal information. In this paper, we propose a decentralized Markov game (MG) theoretic approach to estimate the belief of each possible enemy course of action (ECOA), which is utilized to model the adversary intents. It has the following advantages: (1) It is decentralized. Each cluster or team makes decisions mostly based on local information. We put more autonomies in each group allowing for more flexibilities; (2) A Markov decision process (MDP) can effectively model the uncertainties in the noisy military environment; (3) It is a game model with three players: red force (enemies), blue force (friendly forces), and white force (neutral objects); (4) Correlated-Q reinforcement learning is integrated. With the consideration that actual value functions are not normally known and they must be estimated, we integrate correlated-Q learning concept in our game approach to dynamically adjust the payoffs function of each player. A simulation software package has been developed to demonstrate the performance of our proposed algorithms. Simulations have verified that our proposed algorithms are scalable, stable, and satisfactory in performance.","PeriodicalId":6295,"journal":{"name":"2007 IEEE Aerospace Conference","volume":"95 1","pages":"1-13"},"PeriodicalIF":0.0000,"publicationDate":"2007-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2007 IEEE Aerospace Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AERO.2007.352800","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 9

Abstract

In an adversarial military environment, it is important to predict the enemy's tactical intent efficiently and promptly from lower-level spatial and temporal information. In this paper, we propose a decentralized Markov game (MG) theoretic approach to estimate the belief in each possible enemy course of action (ECOA), which is used to model adversary intent. The approach has the following advantages: (1) It is decentralized. Each cluster or team makes decisions mostly from local information; giving each group more autonomy allows greater flexibility. (2) A Markov decision process (MDP) effectively models the uncertainties of the noisy military environment. (3) It is a game model with three players: the red force (enemies), the blue force (friendly forces), and the white force (neutral objects). (4) Correlated-Q reinforcement learning is integrated. Because the true value functions are not normally known and must be estimated, we incorporate the correlated-Q learning concept into our game approach to dynamically adjust each player's payoff function. A simulation software package has been developed to demonstrate the performance of the proposed algorithms, and simulations have verified that they are scalable, stable, and satisfactory in performance.
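As a concrete illustration of advantage (4), the sketch below shows a tabular correlated-Q style backup for two of the three players (red and blue). It is a minimal sketch under stated assumptions, not the paper's implementation: the state count, action counts, rewards, and the `stage_values` helper are all hypothetical, and the correlated-equilibrium value of the next stage game (which correlated-Q learning would obtain by solving a linear program over joint-action distributions) is approximated here by the joint action that maximizes the players' summed Q-values.

```python
import numpy as np

# Hypothetical problem sizes: abstract states and discrete actions per player.
N_STATES, N_RED, N_BLUE = 5, 3, 3
GAMMA, ALPHA = 0.95, 0.1

rng = np.random.default_rng(0)

# One Q-table per player, indexed by (state, red_action, blue_action).
Q_red = np.zeros((N_STATES, N_RED, N_BLUE))
Q_blue = np.zeros((N_STATES, N_RED, N_BLUE))

def stage_values(s):
    """Value of the stage game at state s for each player.

    A true correlated-Q step solves a linear program for a correlated
    equilibrium over joint actions; as a crude stand-in, this picks the
    joint action that maximizes the players' summed Q-values.
    """
    total = Q_red[s] + Q_blue[s]
    a_r, a_b = np.unravel_index(np.argmax(total), total.shape)
    return Q_red[s, a_r, a_b], Q_blue[s, a_r, a_b]

def correlated_q_update(s, a_r, a_b, r_red, r_blue, s_next):
    """One correlated-Q style backup for both players after a joint transition."""
    v_red, v_blue = stage_values(s_next)
    Q_red[s, a_r, a_b] += ALPHA * (r_red + GAMMA * v_red - Q_red[s, a_r, a_b])
    Q_blue[s, a_r, a_b] += ALPHA * (r_blue + GAMMA * v_blue - Q_blue[s, a_r, a_b])

# Toy rollout with random transitions, just to exercise the update rule.
s = 0
for _ in range(1000):
    a_r, a_b = rng.integers(N_RED), rng.integers(N_BLUE)
    r_red = rng.normal()   # placeholder payoff for the red force
    r_blue = -r_red        # blue's payoff mirrors red's in this toy example
    s_next = rng.integers(N_STATES)
    correlated_q_update(s, a_r, a_b, r_red, r_blue, s_next)
    s = s_next
```

In the full three-player formulation described in the abstract, the white force (neutral objects) would contribute a third Q-table and action index, and the resulting stage-game values would feed the belief estimates over the ECOAs.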