Research on Dynamic Subsidy Based on Deep Reinforcement Learning for Non-Stationary Stochastic Demand in Ride-Hailing

Xiangyu Huang, Yan Cheng, Jing Jin, Aiqing Kou

Sustainability (published 2024-07-23). DOI: 10.3390/su16156289
The ride-hailing market often experiences significant fluctuations in traffic demand, resulting in supply-demand imbalances. Ride-hailing platforms therefore frequently employ dynamic subsidy strategies to incentivize drivers to relocate to high-demand zones. However, determining the appropriate subsidy amount at the appropriate time remains challenging. First, traffic demand is highly non-stationary, characterized by multiple context patterns with time-varying statistical features. Second, the state and action spaces are high-dimensional, spanning multiple spatiotemporal dimensions and context patterns. Third, decisions must be made in real time. To address these challenges, we first construct a Non-Stationary Markov Decision Process (NSMDP) based on assumptions about the dynamics of the ride-hailing service system. We then develop a solution framework for the NSMDP. Within the framework, a change-point detection method based on a feature-enhanced LSTM identifies the change points and time-varying context patterns of stochastic demand, and a deep reinforcement learning algorithm with a deterministic policy optimizes the subsidy decisions. Finally, through simulated experiments with real-world historical data, we demonstrate the effectiveness of the proposed approach: it improves the platform's profits and alleviates supply-demand imbalances under the dynamic subsidy strategy. The results also show that a well-designed dynamic subsidy strategy is particularly effective in the high-demand context pattern with more drastic fluctuations, and that the profitability of the dynamic subsidy strategy increases with the level of non-stationarity.
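To make the change-point idea concrete: the abstract describes detecting shifts between demand context patterns in a non-stationary series. The paper's feature-enhanced LSTM detector is not specified here, so the sketch below substitutes a classical two-sided CUSUM test on a synthetic demand series with a single regime shift. The detector, the synthetic data, and all parameter values are illustrative assumptions, not the authors' method.

```python
import random

def cusum_changepoints(series, threshold=10.0, drift=0.5, warmup=20):
    """Two-sided CUSUM detector: flag indices where the cumulative
    standardized deviation from a baseline mean exceeds `threshold`."""
    # Baseline statistics estimated from an initial warm-up window.
    mean = sum(series[:warmup]) / warmup
    std = (sum((x - mean) ** 2 for x in series[:warmup]) / warmup) ** 0.5 + 1e-9
    pos = neg = 0.0
    points = []
    for i, x in enumerate(series):
        z = (x - mean) / std
        pos = max(0.0, pos + z - drift)   # accumulates upward shifts
        neg = max(0.0, neg - z - drift)   # accumulates downward shifts
        if pos > threshold or neg > threshold:
            points.append(i)
            # Re-baseline on recent observations (a new context pattern).
            recent = series[max(0, i - warmup + 1):i + 1]
            mean = sum(recent) / len(recent)
            pos = neg = 0.0
    return points

# Synthetic non-stationary demand: a calm regime followed by a surge
# with a higher mean and stronger fluctuations (both regimes assumed).
random.seed(0)
demand = [random.gauss(100, 5) for _ in range(200)] + \
         [random.gauss(160, 12) for _ in range(200)]

# Detections cluster at and after index 200, where the surge begins.
print(cusum_changepoints(demand))
```

In the paper's setting, the detected change points would signal the platform that the demand context pattern has shifted, prompting the subsidy policy to adapt; a learned LSTM detector replaces the fixed statistical test used in this sketch.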