{"title":"使用卡尔曼滤波器的强化学习","authors":"Kei Takahata, T. Miura","doi":"10.1109/ICCICC46617.2019.9146066","DOIUrl":null,"url":null,"abstract":"In this investigation, we discuss a game of pursuit-evasion, or a hunter-prey problems using Q-learning framework. This has always been a popular research subject in the field of robotics where a hunter moves around in pursuit a prey. We involve Kalman filters to estimate the prey's status (location and velocity) and learn Q-values based on the estimated status. We evaluate our approach by convergence of Q-values and capturing steps.","PeriodicalId":294902,"journal":{"name":"2019 IEEE 18th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)","volume":"207 ","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Reinforcement Learning using Kalman Filters\",\"authors\":\"Kei Takahata, T. Miura\",\"doi\":\"10.1109/ICCICC46617.2019.9146066\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this investigation, we discuss a game of pursuit-evasion, or a hunter-prey problems using Q-learning framework. This has always been a popular research subject in the field of robotics where a hunter moves around in pursuit a prey. We involve Kalman filters to estimate the prey's status (location and velocity) and learn Q-values based on the estimated status. We evaluate our approach by convergence of Q-values and capturing steps.\",\"PeriodicalId\":294902,\"journal\":{\"name\":\"2019 IEEE 18th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)\",\"volume\":\"207 \",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE 18th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCICC46617.2019.9146066\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 18th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCICC46617.2019.9146066","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
In this investigation, we discuss a pursuit-evasion game, or hunter-prey problem, within a Q-learning framework. This has long been a popular research subject in robotics, where a hunter moves around in pursuit of a prey. We employ Kalman filters to estimate the prey's state (location and velocity) and learn Q-values based on the estimated state. We evaluate our approach by the convergence of the Q-values and the number of steps required to capture the prey.
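To make the combination concrete, the sketch below shows one way a Kalman filter estimate of the prey's position and velocity could feed a tabular Q-learner. It is a minimal illustration only: the constant-velocity motion model, the noise covariances, the 10x10 discretization of the relative position, the four-action set, and the learning parameters are all assumptions for this sketch, not the parameters used in the paper.

```python
import numpy as np

# --- Kalman filter for the prey's state (position and velocity) ---
# Assumed 2D constant-velocity model; the state vector is [x, y, vx, vy].
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is observed
Q_noise = 0.01 * np.eye(4)                  # process noise (assumed)
R_noise = 0.5 * np.eye(2)                   # measurement noise (assumed)


def kalman_step(x, P, z):
    """One predict/update cycle; z is a noisy position measurement."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q_noise
    # Update
    S = H @ P_pred @ H.T + R_noise
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new


# --- Q-learning on the estimated prey state ---
# The hunter discretizes the *estimated* relative prey position into a
# grid cell and uses it as the Q-table state. Grid size, action set, and
# learning parameters are hypothetical choices for illustration.
GRID = 10                       # 10x10 discretization of relative position
ACTIONS = 4                     # up, down, left, right
q_table = np.zeros((GRID, GRID, ACTIONS))
alpha, gamma, epsilon = 0.1, 0.95, 0.1


def to_state(hunter_pos, prey_estimate):
    """Map the estimated relative prey position to a discrete grid cell."""
    rel = prey_estimate[:2] - hunter_pos
    cell = np.clip((rel + GRID // 2).astype(int), 0, GRID - 1)
    return tuple(cell)


def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if np.random.rand() < epsilon:
        return np.random.randint(ACTIONS)
    return int(np.argmax(q_table[state]))


def q_update(state, action, reward, next_state):
    """Standard one-step Q-learning update on the estimated state."""
    best_next = np.max(q_table[next_state])
    q_table[state + (action,)] += alpha * (
        reward + gamma * best_next - q_table[state + (action,)])
```

In such a setup, each time step would run `kalman_step` on the latest prey observation, convert the filtered estimate to a grid state with `to_state`, pick a hunter move with `choose_action`, and apply `q_update` after the reward is observed; convergence of the Q-values and the step count until capture are then the natural evaluation measures mentioned in the abstract.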