Research on Dynamic Subsidy Based on Deep Reinforcement Learning for Non-Stationary Stochastic Demand in Ride-Hailing

Sustainability, Pub Date: 2024-07-23, DOI: 10.3390/su16156289
Xiangyu Huang, Yan Cheng, Jing Jin, Aiqing Kou
{"title":"Research on Dynamic Subsidy Based on Deep Reinforcement Learning for Non-Stationary Stochastic Demand in Ride-Hailing","authors":"Xiangyu Huang, Yan Cheng, Jing Jin, Aiqing Kou","doi":"10.3390/su16156289","DOIUrl":null,"url":null,"abstract":"The ride-hailing market often experiences significant fluctuations in traffic demand, resulting in supply-demand imbalances. In this regard, the dynamic subsidy strategy is frequently employed by ride-hailing platforms to incentivize drivers to relocate to zones with high demand. However, determining the appropriate amount of subsidy at the appropriate time remains challenging. First, traffic demand exhibits high non-stationarity, characterized by multi-context patterns with time-varying statistical features. Second, high-dimensional state/action spaces contain multiple spatiotemporal dimensions and context patterns. Third, decision-making should satisfy real-time requirements. To address the above challenges, we first construct a Non-Stationary Markov Decision Process (NSMDP) based on the assumption of ride-hailing service systems dynamics. Then, we develop a solution framework for the NSMDP. A change point detection method based on feature-enhanced LSTM within the framework can identify the changepoints and time-varying context patterns of stochastic demand. Moreover, the framework also includes a deterministic policy deep reinforcement learning algorithm to optimize. Finally, through simulated experiments with real-world historical data, we demonstrate the effectiveness of the proposed approach. It performs well in improving the platform’s profits and alleviating supply-demand imbalances under the dynamic subsidy strategy. The results also prove that a scientific dynamic subsidy strategy is particularly effective in the high-demand context pattern with more drastic fluctuations. Additionally, the profitability of dynamic subsidy strategy will increase with the increase of the non-stationary level.","PeriodicalId":509360,"journal":{"name":"Sustainability","volume":"101 15","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Sustainability","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/su16156289","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

The ride-hailing market often experiences significant fluctuations in traffic demand, resulting in supply-demand imbalances. Ride-hailing platforms therefore frequently employ dynamic subsidy strategies to incentivize drivers to relocate to high-demand zones. However, determining the appropriate subsidy amount at the appropriate time remains challenging. First, traffic demand is highly non-stationary, characterized by multiple context patterns with time-varying statistical features. Second, the state and action spaces are high-dimensional, spanning multiple spatiotemporal dimensions and context patterns. Third, decisions must be made in real time. To address these challenges, we first construct a Non-Stationary Markov Decision Process (NSMDP) based on assumptions about the dynamics of the ride-hailing service system. We then develop a solution framework for the NSMDP: a change-point detection method based on a feature-enhanced LSTM identifies the change points and time-varying context patterns of stochastic demand, and a deterministic-policy deep reinforcement learning algorithm optimizes the subsidy decisions. Finally, simulated experiments with real-world historical data demonstrate the effectiveness of the proposed approach: under the dynamic subsidy strategy, it improves the platform's profits and alleviates supply-demand imbalances. The results also show that a well-designed dynamic subsidy strategy is particularly effective in the high-demand context pattern, where fluctuations are more drastic, and that the profitability of the dynamic subsidy strategy increases as the level of non-stationarity rises.
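The following is a minimal sketch (not the authors' implementation) of the two components the abstract describes: an LSTM-based change-point detector over demand features, and a deterministic-policy actor that maps the spatiotemporal state plus the detected context pattern to per-zone subsidies. All layer sizes, feature choices, the subsidy cap, and the detection threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ChangePointLSTM(nn.Module):
    """Scores each step of a demand-feature sequence for a regime change (illustrative)."""

    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # change-point probability per time step

    def forward(self, demand_seq: torch.Tensor) -> torch.Tensor:
        # demand_seq: (batch, time, n_features), e.g. order counts, idle drivers, hour of day
        out, _ = self.lstm(demand_seq)
        return torch.sigmoid(self.head(out)).squeeze(-1)  # (batch, time)


class SubsidyActor(nn.Module):
    """Deterministic policy: state + detected context pattern -> per-zone subsidy."""

    def __init__(self, state_dim: int, n_contexts: int, n_zones: int, max_subsidy: float = 5.0):
        super().__init__()
        self.max_subsidy = max_subsidy  # assumed cap on the per-zone subsidy
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_contexts, 128), nn.ReLU(),
            nn.Linear(128, n_zones), nn.Sigmoid(),  # scaled into [0, max_subsidy]
        )

    def forward(self, state: torch.Tensor, context_onehot: torch.Tensor) -> torch.Tensor:
        return self.max_subsidy * self.net(torch.cat([state, context_onehot], dim=-1))


# Usage: flag a change point when the detector's score crosses a threshold,
# then condition the actor on the (one-hot) context pattern in effect.
detector = ChangePointLSTM(n_features=6)
actor = SubsidyActor(state_dim=40, n_contexts=3, n_zones=10)

scores = detector(torch.randn(1, 24, 6))       # one day of hourly demand features
regime_changed = (scores > 0.5).any().item()   # illustrative threshold
subsidy = actor(torch.randn(1, 40), torch.eye(3)[0:1])  # per-zone subsidies, shape (1, 10)
```

In the paper's framework, the detector's output switches the context pattern fed to the policy, and the actor itself would be trained with a deterministic-policy deep reinforcement learning algorithm (DDPG-style critic updates are one common choice); that training loop is omitted from this sketch.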