Deep neural networks for choice analysis: Enhancing behavioral regularity with gradient regularization

IF 7.6 · Tier 1, Engineering & Technology · Q1 Transportation Science & Technology
Siqi Feng, Rui Yao, Stephane Hess, Ricardo A. Daziano, Timothy Brathwaite, Joan Walker, Shenhao Wang
{"title":"Deep neural networks for choice analysis: Enhancing behavioral regularity with gradient regularization","authors":"Siqi Feng ,&nbsp;Rui Yao ,&nbsp;Stephane Hess ,&nbsp;Ricardo A. Daziano ,&nbsp;Timothy Brathwaite ,&nbsp;Joan Walker ,&nbsp;Shenhao Wang","doi":"10.1016/j.trc.2024.104767","DOIUrl":null,"url":null,"abstract":"<div><p>Deep neural networks (DNNs) have been increasingly applied in travel demand modeling because of their automatic feature learning, high predictive performance, and economic interpretability. Nevertheless, DNNs frequently present behaviorally irregular patterns, significantly limiting their practical potentials and theoretical validity in travel behavior modeling. This study proposes strong and weak behavioral regularities as novel metrics to evaluate the monotonicity of individual demand functions (known as the “law of demand”), and further designs a constrained optimization framework with six gradient regularizers to enhance DNNs’ behavioral regularity. The empirical benefits of this framework are illustrated by applying these regularizers to travel survey data from Chicago and London, which enables us to examine the trade-off between predictive power and behavioral regularity for large versus small sample scenarios and in-domain versus out-of-domain generalizations. The results demonstrate that, unlike models with strong behavioral foundations such as the multinomial logit, the benchmark DNNs cannot guarantee behavioral regularity. However, after applying gradient regularization, we increase DNNs’ behavioral regularity by around 6 percentage points while retaining their relatively high predictive power. In the small sample scenario, gradient regularization is more effective than in the large sample scenario, simultaneously improving behavioral regularity by about 20 percentage points and log-likelihood by around 1.7%. Compared with the in-domain generalization of DNNs, gradient regularization works more effectively in out-of-domain generalization: it drastically improves the behavioral regularity of poorly performing benchmark DNNs by around 65 percentage points, highlighting the criticality of behavioral regularization for improving model transferability and applications in forecasting. Moreover, the proposed optimization framework is applicable to other neural network–based choice models such as TasteNets. Future studies could use behavioral regularity as a metric along with log-likelihood, prediction accuracy, and <span><math><msub><mrow><mi>F</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span> score when evaluating travel demand models, and investigate other methods to further enhance behavioral regularity when adopting complex machine learning models.</p></div>","PeriodicalId":54417,"journal":{"name":"Transportation Research Part C-Emerging Technologies","volume":null,"pages":null},"PeriodicalIF":7.6000,"publicationDate":"2024-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transportation Research Part C-Emerging Technologies","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0968090X24002882","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"TRANSPORTATION SCIENCE & TECHNOLOGY","Score":null,"Total":0}
引用次数: 0

Abstract

Deep neural networks (DNNs) have been increasingly applied in travel demand modeling because of their automatic feature learning, high predictive performance, and economic interpretability. Nevertheless, DNNs frequently present behaviorally irregular patterns, significantly limiting their practical potential and theoretical validity in travel behavior modeling. This study proposes strong and weak behavioral regularities as novel metrics to evaluate the monotonicity of individual demand functions (known as the “law of demand”), and further designs a constrained optimization framework with six gradient regularizers to enhance DNNs’ behavioral regularity. The empirical benefits of this framework are illustrated by applying these regularizers to travel survey data from Chicago and London, which enables us to examine the trade-off between predictive power and behavioral regularity for large versus small sample scenarios and in-domain versus out-of-domain generalizations. The results demonstrate that, unlike models with strong behavioral foundations such as the multinomial logit, the benchmark DNNs cannot guarantee behavioral regularity. However, after applying gradient regularization, we increase DNNs’ behavioral regularity by around 6 percentage points while retaining their relatively high predictive power. In the small sample scenario, gradient regularization is more effective than in the large sample scenario, simultaneously improving behavioral regularity by about 20 percentage points and log-likelihood by around 1.7%. Compared with the in-domain generalization of DNNs, gradient regularization works more effectively in out-of-domain generalization: it drastically improves the behavioral regularity of poorly performing benchmark DNNs by around 65 percentage points, highlighting the criticality of behavioral regularization for improving model transferability and applications in forecasting. Moreover, the proposed optimization framework is applicable to other neural network–based choice models such as TasteNets. Future studies could use behavioral regularity as a metric along with log-likelihood, prediction accuracy, and F1 score when evaluating travel demand models, and investigate other methods to further enhance behavioral regularity when adopting complex machine learning models.
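
To make the idea concrete, the sketch below shows one way a gradient regularizer of this kind could be implemented. It is a minimal illustration, assuming a PyTorch feed-forward choice model; it is not the authors' released code and does not reproduce any of the paper's six specific regularizers. It penalizes positive derivatives of each alternative's choice probability with respect to its own cost (violations of the law of demand) alongside the usual cross-entropy loss, and includes a rough stand-in for the behavioral-regularity metric (the share of own-cost derivatives that are non-positive). The class and function names, the input layout encoded by `cost_idx`, and the ReLU-shaped penalty are all illustrative assumptions.

```python
# A minimal sketch, assuming PyTorch. Architecture, input layout, and
# penalty shape are illustrative choices, not the paper's specification.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChoiceDNN(nn.Module):
    """Feed-forward utility network: attributes x -> utilities for J alternatives."""

    def __init__(self, n_features: int, n_alternatives: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_alternatives),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # utilities (logits); softmax gives choice probabilities


def regularized_loss(model, x, y, cost_idx, lam=1.0):
    """Cross-entropy plus a penalty on positive own-cost probability gradients.

    cost_idx[j] is the column of x holding the cost of alternative j
    (an assumed input layout).
    """
    x = x.clone().requires_grad_(True)
    utilities = model(x)
    probs = F.softmax(utilities, dim=1)
    ce = F.cross_entropy(utilities, y)

    penalty = x.new_zeros(())
    for j, col in enumerate(cost_idx):
        # dP_j / d(cost_j) for every observation in the batch
        grad_j = torch.autograd.grad(
            probs[:, j].sum(), x, create_graph=True
        )[0][:, col]
        # Law of demand: this derivative should be <= 0; penalize violations.
        penalty = penalty + F.relu(grad_j).mean()

    return ce + lam * penalty


def behavioral_regularity(model, x, cost_idx):
    """Share of (observation, alternative) pairs with a non-positive own-cost
    gradient -- a rough stand-in for the paper's regularity metric."""
    x = x.clone().requires_grad_(True)
    probs = F.softmax(model(x), dim=1)
    satisfied = []
    for j, col in enumerate(cost_idx):
        grad_j = torch.autograd.grad(
            probs[:, j].sum(), x, retain_graph=True
        )[0][:, col]
        satisfied.append((grad_j <= 0).float())
    return torch.stack(satisfied).mean().item()
```

A hypothetical usage with synthetic data (3 alternatives, 3 attributes each, cost of alternative j in column 3*j) might look as follows. Note that the penalty here is a soft, Lagrangian-style term rather than a hard constraint; the weight `lam` would trade off predictive fit against regularity, mirroring the trade-off the paper examines.

```python
model = ChoiceDNN(n_features=9, n_alternatives=3)
x = torch.randn(256, 9)
y = torch.randint(0, 3, (256,))

loss = regularized_loss(model, x, y, cost_idx=[0, 3, 6], lam=1.0)
loss.backward()
print(behavioral_regularity(model, x, cost_idx=[0, 3, 6]))
```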

Source journal: Transportation Research Part C: Emerging Technologies
CiteScore: 15.80
Self-citation rate: 12.00%
Articles published: 332
Average review time: 64 days

About the journal: Transportation Research: Part C (TR_C) is dedicated to showcasing high-quality, scholarly research that delves into the development, applications, and implications of transportation systems and emerging technologies. Our focus lies not solely on individual technologies, but rather on their broader implications for the planning, design, operation, control, maintenance, and rehabilitation of transportation systems, services, and components. In essence, the intellectual core of the journal revolves around the transportation aspect rather than the technology itself. We actively encourage the integration of quantitative methods from diverse fields such as operations research, control systems, complex networks, computer science, and artificial intelligence. Join us in exploring the intersection of transportation systems and emerging technologies to drive innovation and progress in the field.