JOINT MODELING FOR LEARNING DECISION-MAKING DYNAMICS IN BEHAVIORAL EXPERIMENTS.

Annals of Applied Statistics, Vol. 19, No. 4, pp. 3372-3393
Published: 2025-12-01 (Epub 2025-12-05) · DOI: 10.1214/25-aoas2112
Authors: Yuan Bian, Xingche Guo, Yuanjia Wang
Impact Factor: 1.4 · JCR Q2, Statistics & Probability · JCR Region 4 (Mathematics)
Citations: 0
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12814034/pdf/

Abstract

Major depressive disorder (MDD), a leading cause of disability and mortality, is associated with reward-processing abnormalities and concentration issues. Motivated by the probabilistic reward task from the Establishing Moderators and Biosignatures of Antidepressant Response in Clinical Care (EMBARC) study, we propose a novel framework that integrates the reinforcement learning (RL) model and drift-diffusion model (DDM) to jointly analyze reward-based decision-making with response times. To account for emerging evidence suggesting that decision-making may alternate between multiple interleaved strategies, we model latent state switching using a hidden Markov model (HMM). In the engaged state, decisions follow an RL-DDM, simultaneously capturing reward processing, decision dynamics, and temporal structure. In contrast, in the lapsed state, decision-making is modeled using a simplified DDM, where specific parameters are fixed to approximate random guessing with equal probability. The proposed method is implemented using a computationally efficient generalized expectation-maximization (EM) algorithm with forward-backward procedures. Through extensive numerical studies, we demonstrate that our proposed method outperforms competing approaches across various reward-generating distributions, under both strategy-switching and non-switching scenarios, as well as in the presence of input perturbations. When applied to the EMBARC study, our framework reveals that MDD patients exhibit lower overall engagement than healthy controls and experience longer responses when they do engage. Additionally, we show that neuroimaging measures of brain activities are associated with decision-making characteristics in the engaged state but not in the lapsed state, providing evidence of brain-behavior association specific to the engaged state.
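The latent engaged/lapsed switching described above is a two-state hidden Markov model, and the generalized EM algorithm mentioned in the abstract relies on forward-backward recursions over each subject's trial sequence. The sketch below illustrates only that forward-backward step, assuming per-trial observation likelihoods under the RL-DDM (engaged) and simplified DDM (lapsed) have already been computed; all function names, parameter values, and shapes are illustrative and are not taken from the paper.

```python
import numpy as np

def forward_backward(lik, trans, init):
    """Posterior state probabilities for one subject's trial sequence.

    lik   : (T, 2) per-trial likelihoods under the [engaged, lapsed] models
    trans : (2, 2) state-transition matrix, rows summing to 1
    init  : (2,)   initial state distribution
    """
    T = lik.shape[0]
    alpha = np.zeros((T, 2))  # scaled forward probabilities
    beta = np.zeros((T, 2))   # scaled backward probabilities
    scale = np.zeros(T)       # per-trial normalizers

    # Forward pass with scaling to avoid numerical underflow.
    alpha[0] = init * lik[0]
    scale[0] = alpha[0].sum()
    alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ trans) * lik[t]
        scale[t] = alpha[t].sum()
        alpha[t] /= scale[t]

    # Backward pass, reusing the forward-pass scale factors.
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (trans @ (lik[t + 1] * beta[t + 1])) / scale[t + 1]

    # Posterior P(state at trial t | all trials), one row per trial.
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    loglik = np.log(scale).sum()  # marginal log-likelihood of the sequence
    return gamma, loglik
```

In an EM fit of this kind, the posterior weights `gamma` would drive the M-step updates of the state-specific RL-DDM and DDM parameters, and `loglik` would be monitored for convergence.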

Source journal: Annals of Applied Statistics (Social Sciences: Statistics & Probability)
CiteScore: 3.10 · Self-citation rate: 5.60% · Articles per year: 131 · Review time: 6-12 weeks
Journal description: Statistical research spans an enormous range from direct subject-matter collaborations to pure mathematical theory. The Annals of Applied Statistics, the newest journal from the IMS, is aimed at papers in the applied half of this range. Published quarterly in both print and electronic form, our goal is to provide a timely and unified forum for all areas of applied statistics.