{"title":"基于学习的随机多无人机系统联合任务分配与系统设计框架","authors":"Inwook Kim, J. R. Morrison","doi":"10.1109/ICUAS.2018.8453318","DOIUrl":null,"url":null,"abstract":"We consider a system of UAVs, depots, service stations and tasks in a stochastic environment. Our goal is to jointly determine the system resources (system design), task allocation and waypoint selection. To our knowledge, none have studied this joint decision problem in the stochastic context. We formulate the problem as a Markov decision process (MDP) and resort to deep reinforcement learning (DRL) to obtain state-based decisions. Numerical studies are conducted to assess the performance of the proposed approach. In small examples for which an optimal policy can be found, the DRL based approach is much faster than value iteration and obtained nearly optimal solutions. In large examples, the DRL based approach can find efficient designs and policies.","PeriodicalId":246293,"journal":{"name":"2018 International Conference on Unmanned Aircraft Systems (ICUAS)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Learning Based Framework for Joint Task Allocation and System Design in Stochastic Multi-UAV Systems\",\"authors\":\"Inwook Kim, J. R. Morrison\",\"doi\":\"10.1109/ICUAS.2018.8453318\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We consider a system of UAVs, depots, service stations and tasks in a stochastic environment. Our goal is to jointly determine the system resources (system design), task allocation and waypoint selection. To our knowledge, none have studied this joint decision problem in the stochastic context. We formulate the problem as a Markov decision process (MDP) and resort to deep reinforcement learning (DRL) to obtain state-based decisions. Numerical studies are conducted to assess the performance of the proposed approach. In small examples for which an optimal policy can be found, the DRL based approach is much faster than value iteration and obtained nearly optimal solutions. In large examples, the DRL based approach can find efficient designs and policies.\",\"PeriodicalId\":246293,\"journal\":{\"name\":\"2018 International Conference on Unmanned Aircraft Systems (ICUAS)\",\"volume\":\"2 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 International Conference on Unmanned Aircraft Systems (ICUAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICUAS.2018.8453318\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 International Conference on Unmanned Aircraft Systems (ICUAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICUAS.2018.8453318","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Learning Based Framework for Joint Task Allocation and System Design in Stochastic Multi-UAV Systems
We consider a system of UAVs, depots, service stations and tasks operating in a stochastic environment. Our goal is to jointly determine the system resources (system design), the task allocation and the waypoint selection. To our knowledge, this joint decision problem has not previously been studied in the stochastic setting. We formulate the problem as a Markov decision process (MDP) and employ deep reinforcement learning (DRL) to obtain state-based decisions. Numerical studies are conducted to assess the performance of the proposed approach. In small examples for which an optimal policy can be computed, the DRL-based approach is much faster than value iteration and obtains nearly optimal solutions. In large examples, the DRL-based approach finds efficient designs and policies.
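To make the MDP framing concrete, below is a minimal toy sketch of the kind of stochastic UAV task-allocation MDP the abstract describes, solved with plain value iteration (the exact baseline the paper compares against). The state variables, distances, success probability and rewards are illustrative assumptions, not the authors' formulation; the paper's contribution is to replace exact value iteration with DRL so that larger, joint system-design and allocation instances remain tractable.

```python
"""Hypothetical toy MDP: one UAV, a depot, and stochastic tasks.
All problem data below are assumed for illustration only."""
from itertools import chain, combinations

TASK_POS = {1: 2, 2: 4, 3: 5}      # task id -> distance from the depot (assumed)
DEPOT = 0
BATTERY_MAX = 12                   # flight-range budget before recharging
P_SUCCESS = 0.8                    # prob. a visited task is actually completed
TASK_REWARD = 10.0
DIST_COST = 0.5                    # cost per unit of distance flown
GAMMA = 0.95

def dist(a, b):
    pa = 0 if a == DEPOT else TASK_POS[a]
    pb = 0 if b == DEPOT else TASK_POS[b]
    return abs(pa - pb)

def all_task_subsets():
    tasks = sorted(TASK_POS)
    return [frozenset(s) for s in chain.from_iterable(
        combinations(tasks, k) for k in range(len(tasks) + 1))]

# A state is (uav_location, frozenset_of_remaining_tasks, battery_left).
STATES = [(loc, rem, b)
          for rem in all_task_subsets()
          for loc in [DEPOT] + sorted(TASK_POS)
          for b in range(BATTERY_MAX + 1)]

def actions(state):
    loc, rem, b = state
    acts = [("visit", t) for t in rem if dist(loc, t) <= b]
    if loc != DEPOT and dist(loc, DEPOT) <= b:
        acts.append(("recharge", DEPOT))   # return to depot and recharge
    return acts

def transitions(state, action):
    """Return a list of (probability, next_state, reward)."""
    loc, rem, b = state
    kind, target = action
    d = dist(loc, target)
    if kind == "recharge":
        return [(1.0, (DEPOT, rem, BATTERY_MAX), -DIST_COST * d)]
    # Visiting a task succeeds with P_SUCCESS; otherwise it must be retried.
    succ = (target, rem - {target}, b - d)
    fail = (target, rem, b - d)
    return [(P_SUCCESS, succ, TASK_REWARD - DIST_COST * d),
            (1 - P_SUCCESS, fail, -DIST_COST * d)]

def value_iteration(tol=1e-6):
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            acts = actions(s)
            if not s[1] or not acts:       # all tasks done, or UAV stranded
                continue
            best = max(sum(p * (r + GAMMA * V[ns]) for p, ns, r in transitions(s, a))
                       for a in acts)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

if __name__ == "__main__":
    V = value_iteration()
    start = (DEPOT, frozenset(TASK_POS), BATTERY_MAX)
    print(f"Optimal expected return from the initial state: {V[start]:.2f}")
```

Even this toy instance has several hundred states; once system-design choices (how many UAVs, where to install service stations) are folded into the state and action spaces, exact value iteration becomes impractical, which is the motivation for the learning-based policy described in the abstract.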