On Developing a UAV Pursuit-Evasion Policy Using Reinforcement Learning

Bogdan I. Vlahov, Eric Squires, Laura Strickland, Charles Pippin

2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 859-864, December 2018. DOI: 10.1109/ICMLA.2018.00138
Abstract: We present an approach for learning a reactive maneuver policy for a UAV involved in a close-quarters one-on-one aerial engagement. Specifically, UAVs with behaviors learned through reinforcement learning can match or improve upon simple but effective intercept behaviors. In this paper, we discuss a framework for developing reactive policies that can learn to exploit an opponent's behavior. In particular, we apply the A3C algorithm with a deep neural network to the aerial combat domain. The efficacy of the learned policy is demonstrated in Monte Carlo experiments. We also demonstrate an architecture that transfers the learned policy from simulation to an actual aircraft, along with its effectiveness in live flight.
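The A3C algorithm mentioned in the abstract trains an actor (the maneuver policy) and a critic (a state-value estimate) with a shared per-step objective: a policy-gradient term weighted by the advantage, a squared-error value term, and an entropy bonus that discourages premature convergence. The sketch below illustrates that per-step loss for a discrete maneuver set; it is a minimal NumPy illustration of the general A3C objective, not the paper's implementation, and the function name, arguments, and the entropy weight `beta` are assumptions chosen for clarity.

```python
import numpy as np

def a3c_step_loss(logits, action, reward_to_go, value_estimate, beta=0.01):
    """Illustrative per-step A3C-style loss (not the paper's code).

    logits         : raw policy scores over discrete maneuvers
    action         : index of the maneuver actually taken
    reward_to_go   : bootstrapped n-step return R
    value_estimate : critic's estimate V(s) for the current state
    beta           : entropy-bonus weight encouraging exploration
    """
    # Softmax policy over discrete actions (shifted for numerical stability)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Advantage: how much better the outcome was than the critic expected
    advantage = reward_to_go - value_estimate

    # Policy-gradient term: -log pi(a|s) * A, with A treated as a constant
    policy_loss = -np.log(probs[action]) * advantage

    # Critic term: squared error between the return and the value estimate
    value_loss = advantage ** 2

    # Policy entropy, subtracted so the optimizer keeps the policy stochastic
    entropy = -np.sum(probs * np.log(probs))

    return policy_loss + value_loss - beta * entropy
```

In the full asynchronous setting, many simulation workers compute gradients of this loss in parallel and apply them to shared network parameters, which is what makes A3C practical for Monte Carlo-style training in a flight simulator.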