Performance of Cross-Validated Targeted Maximum Likelihood Estimation.

IF 1.8 · Medicine, CAS Tier 4 · Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY
Matthew J Smith, Rachael V Phillips, Camille Maringe, Miguel Angel Luque-Fernandez
{"title":"交叉验证目标最大似然估计的性能。","authors":"Matthew J Smith, Rachael V Phillips, Camille Maringe, Miguel Angel Luque-Fernandez","doi":"10.1002/sim.70185","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Advanced methods for causal inference, such as targeted maximum likelihood estimation (TMLE), require specific convergence rates and the Donsker class condition for valid statistical estimation and inference. In situations where there is no differentiability due to data sparsity or near-positivity violations, the Donsker class condition is violated. In such instances, the bias of the targeted estimand is inflated, and its variance is anti-conservative, leading to poor coverage. Cross-validation of the TMLE algorithm (CVTMLE) is a straightforward, yet effective way to ensure efficiency, especially in settings where the Donsker class condition is violated, such as random or near-positivity violations. We aim to investigate the performance of CVTMLE compared to TMLE in various settings.</p><p><strong>Methods: </strong>We utilized the data-generating mechanism described in Leger et al. (2022) to run a Monte Carlo experiment under different Donsker class violations. Then, we evaluated the respective statistical performances of TMLE and CVTMLE with different super learner libraries, with and without regression tree methods.</p><p><strong>Results: </strong>We found that CVTMLE vastly improves confidence interval coverage without adversely affecting bias, particularly in settings with small sample sizes and near-positivity violations. Furthermore, incorporating regression trees using standard TMLE with ensemble super learner-based initial estimates increases bias and reduces variance, leading to invalid statistical inference.</p><p><strong>Conclusions: </strong>We show through simulations that CVTMLE is much less sensitive to the choice of the super learner library and thereby provides better estimation and inference in cases where the super learner library uses more flexible candidates and is prone to overfitting.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"44 15-17","pages":"e70185"},"PeriodicalIF":1.8000,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12270713/pdf/","citationCount":"0","resultStr":"{\"title\":\"Performance of Cross-Validated Targeted Maximum Likelihood Estimation.\",\"authors\":\"Matthew J Smith, Rachael V Phillips, Camille Maringe, Miguel Angel Luque-Fernandez\",\"doi\":\"10.1002/sim.70185\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Advanced methods for causal inference, such as targeted maximum likelihood estimation (TMLE), require specific convergence rates and the Donsker class condition for valid statistical estimation and inference. In situations where there is no differentiability due to data sparsity or near-positivity violations, the Donsker class condition is violated. In such instances, the bias of the targeted estimand is inflated, and its variance is anti-conservative, leading to poor coverage. Cross-validation of the TMLE algorithm (CVTMLE) is a straightforward, yet effective way to ensure efficiency, especially in settings where the Donsker class condition is violated, such as random or near-positivity violations. 
We aim to investigate the performance of CVTMLE compared to TMLE in various settings.</p><p><strong>Methods: </strong>We utilized the data-generating mechanism described in Leger et al. (2022) to run a Monte Carlo experiment under different Donsker class violations. Then, we evaluated the respective statistical performances of TMLE and CVTMLE with different super learner libraries, with and without regression tree methods.</p><p><strong>Results: </strong>We found that CVTMLE vastly improves confidence interval coverage without adversely affecting bias, particularly in settings with small sample sizes and near-positivity violations. Furthermore, incorporating regression trees using standard TMLE with ensemble super learner-based initial estimates increases bias and reduces variance, leading to invalid statistical inference.</p><p><strong>Conclusions: </strong>We show through simulations that CVTMLE is much less sensitive to the choice of the super learner library and thereby provides better estimation and inference in cases where the super learner library uses more flexible candidates and is prone to overfitting.</p>\",\"PeriodicalId\":21879,\"journal\":{\"name\":\"Statistics in Medicine\",\"volume\":\"44 15-17\",\"pages\":\"e70185\"},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2025-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12270713/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Statistics in Medicine\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1002/sim.70185\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"MATHEMATICAL & COMPUTATIONAL BIOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Statistics in Medicine","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1002/sim.70185","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"MATHEMATICAL & COMPUTATIONAL BIOLOGY","Score":null,"Total":0}
Citations: 0

Abstract


Background: Advanced methods for causal inference, such as targeted maximum likelihood estimation (TMLE), require specific convergence rates and the Donsker class condition for valid statistical estimation and inference. In situations where there is no differentiability due to data sparsity or near-positivity violations, the Donsker class condition is violated. In such instances, the bias of the targeted estimand is inflated, and its variance is anti-conservative, leading to poor coverage. Cross-validation of the TMLE algorithm (CVTMLE) is a straightforward, yet effective way to ensure efficiency, especially in settings where the Donsker class condition is violated, such as random or near-positivity violations. We aim to investigate the performance of CVTMLE compared to TMLE in various settings.
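
As background (not stated in the abstract itself, but standard in the TMLE literature), the two ingredients referred to above can be written out for the average treatment effect (ATE) with binary treatment A, binary outcome Y, and covariates W: the efficient influence curve, whose sample variance drives the Wald-type confidence intervals, and the logistic "targeting" fluctuation applied to the initial outcome regression.

```latex
% Efficient influence curve for \psi = E[\bar{Q}(1,W) - \bar{Q}(0,W)],
% with \bar{Q}(A,W) = E[Y \mid A,W] and g(W) = P(A = 1 \mid W):
D^*(O) = H(A,W)\,\bigl(Y - \bar{Q}(A,W)\bigr) + \bar{Q}(1,W) - \bar{Q}(0,W) - \psi,
\qquad H(A,W) = \frac{A}{g(W)} - \frac{1-A}{1-g(W)}.

% Targeting step: a one-dimensional logistic fluctuation of the initial fit,
% using logit \bar{Q} as an offset and the clever covariate H as the regressor:
\operatorname{logit}\bar{Q}^{*}_{\varepsilon}(A,W)
  = \operatorname{logit}\bar{Q}(A,W) + \varepsilon\,H(A,W).
```

The Wald-type variance is estimated by the sample variance of D* divided by n. When the initial estimators are highly data-adaptive and the Donsker class condition fails, this variance estimate becomes anti-conservative, which is the failure mode that cross-validating the nuisance estimates (CVTMLE) is intended to repair.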

Methods: We utilized the data-generating mechanism described in Leger et al. (2022) to run a Monte Carlo experiment under different Donsker class violations. Then, we evaluated the respective statistical performances of TMLE and CVTMLE with different super learner libraries, with and without regression tree methods.
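
To make the CVTMLE recipe concrete, the following is a minimal sketch of a pooled cross-validated TMLE for the ATE, written in Python with scikit-learn gradient boosting standing in for the super learner library used in the paper; the function name cv_tmle_ate is hypothetical, and the code is an illustration of the technique under these assumptions, not the authors' implementation.

```python
# Illustrative pooled CV-TMLE for the ATE with binary A and binary Y.
# Gradient boosting is a stand-in for the super learner; cv_tmle_ate is
# a hypothetical helper name, not part of the paper's software.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, logit
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import KFold


def cv_tmle_ate(W, A, Y, n_splits=10, g_bound=0.01, seed=0):
    n = len(Y)
    g1 = np.zeros(n)                   # out-of-fold P(A=1 | W)
    Q0, Q1 = np.zeros(n), np.zeros(n)  # out-of-fold E[Y | A=a, W]
    XA = np.column_stack([W, A])

    # Cross-fit the nuisance estimators: fit on training folds,
    # predict on the held-out validation fold.
    for train, valid in KFold(n_splits, shuffle=True, random_state=seed).split(W):
        g_fit = GradientBoostingClassifier().fit(W[train], A[train])
        g1[valid] = g_fit.predict_proba(W[valid])[:, 1]

        Q_fit = GradientBoostingClassifier().fit(XA[train], Y[train])
        Q1[valid] = Q_fit.predict_proba(np.column_stack([W[valid], np.ones(len(valid))]))[:, 1]
        Q0[valid] = Q_fit.predict_proba(np.column_stack([W[valid], np.zeros(len(valid))]))[:, 1]

    # Bound g away from 0/1 (guards against near-positivity violations).
    g1 = np.clip(g1, g_bound, 1 - g_bound)
    Q0, Q1 = np.clip(Q0, 1e-6, 1 - 1e-6), np.clip(Q1, 1e-6, 1 - 1e-6)
    QA = np.where(A == 1, Q1, Q0)

    # Targeting step pooled over all validation-fold predictions:
    # logistic fluctuation with clever covariate H and offset logit(QA).
    H = A / g1 - (1 - A) / (1 - g1)

    def negloglik(eps):
        p = np.clip(expit(logit(QA) + eps[0] * H), 1e-9, 1 - 1e-9)
        return -np.mean(Y * np.log(p) + (1 - Y) * np.log(1 - p))

    eps_hat = minimize(negloglik, x0=np.array([0.0])).x[0]

    # Updated counterfactual predictions and plug-in estimate.
    Q1_star = expit(logit(Q1) + eps_hat / g1)
    Q0_star = expit(logit(Q0) - eps_hat / (1 - g1))
    psi = np.mean(Q1_star - Q0_star)

    # Influence-curve-based standard error and 95% Wald interval.
    QA_star = np.where(A == 1, Q1_star, Q0_star)
    ic = H * (Y - QA_star) + Q1_star - Q0_star - psi
    se = ic.std(ddof=1) / np.sqrt(n)
    return psi, (psi - 1.96 * se, psi + 1.96 * se)
```

The design choice that distinguishes this from standard TMLE is that the nuisance models are fit on training folds only, while the fluctuation and the influence-curve variance are evaluated on the pooled out-of-fold predictions.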

Results: We found that CVTMLE vastly improves confidence interval coverage without adversely affecting bias, particularly in settings with small sample sizes and near-positivity violations. Furthermore, incorporating regression trees using standard TMLE with ensemble super learner-based initial estimates increases bias and reduces variance, leading to invalid statistical inference.
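
For readers who want to mimic the "with and without regression trees" comparison of initial estimators, two candidate ensembles along these lines could be specified as below; scikit-learn's StackingClassifier is used as a rough stand-in for the super learner, and both library compositions are assumptions chosen for illustration rather than the specification used in the paper.

```python
# Two illustrative candidate libraries for the initial estimates,
# loosely mirroring the "with and without regression trees" comparison.
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

# Smooth, parametric-leaning library: less flexible, less prone to overfitting.
library_smooth = StackingClassifier(
    estimators=[("glm", LogisticRegression(max_iter=1000)), ("nb", GaussianNB())],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)

# Library that adds tree-based learners: more flexible, and the setting in which
# the abstract reports standard TMLE breaking down while CVTMLE remains stable.
library_trees = StackingClassifier(
    estimators=[("glm", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200)),
                ("gbm", GradientBoostingClassifier())],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
```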

Conclusions: We show through simulations that CVTMLE is much less sensitive to the choice of the super learner library and thereby provides better estimation and inference in cases where the super learner library uses more flexible candidates and is prone to overfitting.

Source journal
Statistics in Medicine (Medicine - Public, Environmental & Occupational Health)
CiteScore: 3.40
Self-citation rate: 10.00%
Articles per year: 334
Review time: 2-4 weeks
Journal overview: The journal aims to influence practice in medicine and its associated sciences through the publication of papers on statistical and other quantitative methods. Papers will explain new methods and demonstrate their application, preferably through a substantive, real, motivating example or a comprehensive evaluation based on an illustrative example. Alternatively, papers will report on case-studies where creative use or technical generalizations of established methodology is directed towards a substantive application. Reviews of, and tutorials on, general topics relevant to the application of statistics to medicine will also be published. The main criteria for publication are appropriateness of the statistical methods to a particular medical problem and clarity of exposition. Papers with primarily mathematical content will be excluded. The journal aims to enhance communication between statisticians, clinicians and medical researchers.