{"title":"基于强化学习的罕见事件下城际人类流动性模拟","authors":"Y. Pang, K. Tsubouchi, T. Yabe, Y. Sekimoto","doi":"10.1145/3397536.3422244","DOIUrl":null,"url":null,"abstract":"Agent-based simulations, combined with large scale mobility data, have been an effective method for understanding urban scale human dynamics. However, collecting such large scale human mobility datasets are especially difficult during rare events (e.g., natural disasters), reducing the performance of agent-based simulations. To tackle this problem, we develop an agent-based model that can simulate urban dynamics during rare events by learning from other cities using inverse reinforcement learning. More specifically, in our framework, agents imitate real human-beings' travel behavior from areas where rare events have occurred in the past (source area) and produce synthetic people movement in different cities where such rare events have never occurred (target area). Our framework contains three main stages: 1) recovering the reward function, where the people's travel patterns and preferences are learned from the source areas; 2) transferring the model of the source area to the target areas; 3) simulating the people movement based on learned model in the target area. We apply our approach in various cities for both normal and rare situations using real-world GPS data collected from more than 1 million people in Japan, and show higher simulation performance than previous models.","PeriodicalId":233918,"journal":{"name":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","volume":"235 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":"{\"title\":\"Intercity Simulation of Human Mobility at Rare Events via Reinforcement Learning\",\"authors\":\"Y. Pang, K. Tsubouchi, T. Yabe, Y. 
Sekimoto\",\"doi\":\"10.1145/3397536.3422244\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Agent-based simulations, combined with large scale mobility data, have been an effective method for understanding urban scale human dynamics. However, collecting such large scale human mobility datasets are especially difficult during rare events (e.g., natural disasters), reducing the performance of agent-based simulations. To tackle this problem, we develop an agent-based model that can simulate urban dynamics during rare events by learning from other cities using inverse reinforcement learning. More specifically, in our framework, agents imitate real human-beings' travel behavior from areas where rare events have occurred in the past (source area) and produce synthetic people movement in different cities where such rare events have never occurred (target area). Our framework contains three main stages: 1) recovering the reward function, where the people's travel patterns and preferences are learned from the source areas; 2) transferring the model of the source area to the target areas; 3) simulating the people movement based on learned model in the target area. 
We apply our approach in various cities for both normal and rare situations using real-world GPS data collected from more than 1 million people in Japan, and show higher simulation performance than previous models.\",\"PeriodicalId\":233918,\"journal\":{\"name\":\"Proceedings of the 28th International Conference on Advances in Geographic Information Systems\",\"volume\":\"235 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-11-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"11\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 28th International Conference on Advances in Geographic Information Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3397536.3422244\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3397536.3422244","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Intercity Simulation of Human Mobility at Rare Events via Reinforcement Learning
Agent-based simulations, combined with large-scale mobility data, have been an effective method for understanding urban-scale human dynamics. However, collecting such large-scale human mobility datasets is especially difficult during rare events (e.g., natural disasters), reducing the performance of agent-based simulations. To tackle this problem, we develop an agent-based model that can simulate urban dynamics during rare events by learning from other cities using inverse reinforcement learning. More specifically, in our framework, agents imitate real human beings' travel behavior in areas where rare events have occurred in the past (source areas) and produce synthetic people movement in different cities where such rare events have never occurred (target areas). Our framework contains three main stages: 1) recovering the reward function, where people's travel patterns and preferences are learned from the source areas; 2) transferring the model of the source areas to the target areas; 3) simulating people movement in the target areas based on the learned model. We apply our approach to various cities for both normal and rare situations using real-world GPS data collected from more than 1 million people in Japan, and show higher simulation performance than previous models.
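The three stages above can be illustrated with a toy sketch of stage 1, reward recovery. The paper's own model and data are not available here, so this is a generic maximum-entropy-IRL-style feature-matching loop on a hypothetical 1-D world: the "expert" visitation frequencies stand in for observed source-city trajectories, and the learned weight vector `w` is the recovered reward that a transfer stage would then apply to a target city's features. All names and numbers are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy 1-D world: 5 locations, actions {0: left, 1: right}.
# Features phi(s) are one-hot per location (illustrative assumption).
N_STATES, GAMMA, LR = 5, 0.9, 0.1
PHI = np.eye(N_STATES)

def transition(s, a):
    # Deterministic movement with reflecting boundaries.
    return max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))

def soft_policy(w, iters=50):
    """Soft value iteration under the linear reward r(s) = w . phi(s)."""
    r = PHI @ w
    v = np.zeros(N_STATES)
    for _ in range(iters + 1):
        q = np.array([[r[s] + GAMMA * v[transition(s, a)] for a in (0, 1)]
                      for s in range(N_STATES)])
        v = np.log(np.exp(q).sum(axis=1))  # soft max over actions
    return np.exp(q - v[:, None])          # stochastic policy pi(a|s)

def visitation(pi, start=0, horizon=20):
    """Expected state-visitation frequencies over a finite horizon."""
    d = np.zeros(N_STATES); d[start] = 1.0
    total = d.copy()
    for _ in range(horizon):
        d_next = np.zeros(N_STATES)
        for s in range(N_STATES):
            for a in (0, 1):
                d_next[transition(s, a)] += d[s] * pi[s, a]
        d = d_next
        total += d
    return total / total.sum()

# "Expert" demonstrations from the source city: people mostly
# travel toward the rightmost location (made-up frequencies).
expert_visits = np.array([0.05, 0.05, 0.10, 0.20, 0.60])
expert_features = PHI.T @ expert_visits

# Stage 1: gradient ascent matching model visitation to expert visitation.
w = np.zeros(N_STATES)
for _ in range(200):
    pi = soft_policy(w)
    w += LR * (expert_features - PHI.T @ visitation(pi))

# w now encodes the recovered preferences; in a transfer stage it would
# be applied to the target city's feature map to drive simulation there.
```

The design choice worth noting is that the reward is expressed over *features* of locations rather than location identities; that is what makes stage 2 (transfer) possible, since a target city shares features even though it shares no states with the source city.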