Are Bayesian regularization methods a must for multilevel dynamic latent variables models?

IF 4.6 · CAS Region 2 (Psychology) · JCR Q1, Psychology, Experimental
Vivato V Andriamiarana, Pascal Kilian, Holger Brandt, Augustin Kelava
DOI: 10.3758/s13428-024-02589-9
Journal: Behavior Research Methods, 57(2), 71
Publication date: 2025-01-22
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11754388/pdf/
Citations: 0

Abstract


Due to the increased availability of intensive longitudinal data, researchers have been able to specify increasingly complex dynamic latent variable models. However, these models present challenges related to overfitting, hierarchical features, non-linearity, and sample size requirements. There are further limitations to be addressed regarding the finite sample performance of priors, including bias, accuracy, and type I error inflation. Bayesian estimation provides the flexibility to treat these issues simultaneously through the use of regularizing priors. In this paper, we aim to compare several Bayesian regularizing priors (ridge, Bayesian Lasso, adaptive spike-and-slab Lasso, and regularized horseshoe). To achieve this, we introduce a multilevel dynamic latent variable model. We then conduct two simulation studies and a prior sensitivity analysis using empirical data. The results show that the ridge prior is able to provide sparse estimation while avoiding overshrinkage of relevant signals, in comparison to other Bayesian regularization priors. In addition, we find that the Lasso and heavy-tailed regularizing priors do not perform well compared to light-tailed priors for the logistic model. In the context of multilevel dynamic latent variable modeling, it is often attractive to diversify the choice of priors. However, we instead suggest prioritizing the choice of ridge priors without extreme shrinkage, which we show can handle the trade-off between informativeness and generality, compared to other priors with high concentration around zero and/or heavy tails.
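The priors compared in the abstract differ mainly in how sharply they concentrate around zero and how much probability mass they leave in the tails. As a hedged illustration only (not the paper's actual multilevel model or its exact prior parameterizations), the sketch below contrasts the tail mass of a Gaussian (the ridge prior), a Laplace (the Bayesian Lasso), and a Cauchy (the heavy-tailed scale family underlying the horseshoe); unit scales are chosen purely for illustration.

```python
import numpy as np
from scipy import stats

# Three marginal prior shapes, all centered at zero with unit scale
# (illustrative choices, not the paper's settings):
#   ridge  -> Gaussian (light tails, smooth shrinkage)
#   lasso  -> Laplace (sharper peak at zero, exponential tails)
#   heavy  -> Cauchy (very heavy tails, as in the horseshoe's half-Cauchy scale)
priors = {
    "ridge": stats.norm(loc=0.0, scale=1.0),
    "lasso": stats.laplace(loc=0.0, scale=1.0),
    "heavy": stats.cauchy(loc=0.0, scale=1.0),
}

# Tail mass P(|x| > 3): heavy-tailed priors keep room for large
# coefficients (little shrinkage of strong signals), while light-tailed
# priors pull large values toward zero much more aggressively.
for name, d in priors.items():
    tail = 2.0 * d.sf(3.0)  # two-sided survival probability
    print(f"{name}: P(|x| > 3) = {tail:.4f}")
# Approximate output: ridge 0.0027, lasso 0.0498, heavy 0.2048
```

This ordering is what the abstract's trade-off refers to: a prior with high concentration at zero and heavy tails (horseshoe-like) shrinks noise hard while leaving signals nearly untouched, whereas the ridge applies moderate shrinkage everywhere, which the authors report works better here for the logistic part of the model.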

Source journal: Behavior Research Methods
CiteScore: 10.30
Self-citation rate: 9.30%
Annual output: 266 articles
Journal description: Behavior Research Methods publishes articles concerned with the methods, techniques, and instrumentation of research in experimental psychology. The journal focuses particularly on the use of computer technology in psychological research. An annual special issue is devoted to this field.