Analytical Methods for a Learning Health System: 2. Design of Observational Studies.

Michael Stoto, Michael Oakes, Elizabeth Stuart, Elisa L Priest, Lucy Savitz
EGEMS (Washington, DC), vol. 5, no. 1, p. 29. Published 2017-12-07. DOI: 10.5334/egems.251
Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/15/d6/egems-5-1-251.PMC5982802.pdf
Citations: 8

Abstract

The second paper in a series on how learning health systems can use routinely collected electronic health data (EHD) to advance knowledge and support continuous learning, this review summarizes study design approaches, including choosing appropriate data sources, and methods for the design and analysis of natural and quasi-experiments. The primary strength of the study design approaches described in this section is that they study the impact of a deliberate intervention in real-world settings, which is critical for external validity. These evaluation designs address estimating the counterfactual: what would have happened if the intervention had not been implemented. At the individual level, epidemiologic designs focus on identifying situations in which bias is minimized. Natural and quasi-experiments focus on situations where the change in assignment breaks the usual links that could lead to confounding, reverse causation, and so forth. And because these observational studies typically use data gathered for patient management or administrative purposes, the possibility of observation bias is minimized. The disadvantages are that one cannot necessarily attribute the effect to the intervention (as opposed to other things that might have changed), and the results do not indicate what about the intervention made a difference. Because they cannot rely on randomization to establish causality, program evaluation methods demand a more careful consideration of the "theory" of the intervention and how it is expected to play out. A logic model describing this theory can help to design appropriate comparisons, account for all influential variables in a model, and help to ensure that evaluation studies focus on the critical intermediate and long-term outcomes as well as possible confounders.
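To make the counterfactual idea concrete, a common quasi-experimental estimator is difference-in-differences, which uses a comparison group's pre-to-post trend to project what the intervention group would have looked like absent the intervention. The sketch below is illustrative only (the event rates are hypothetical, not from the paper) and assumes the parallel-trends condition holds:

```python
# Hypothetical pre/post outcome rates for an intervention group and a
# comparison group drawn from routinely collected EHD.
pre_treat, post_treat = 0.40, 0.55  # intervention group
pre_ctrl, post_ctrl = 0.42, 0.47    # comparison group

# Counterfactual: the intervention group's expected post-period rate had
# the intervention not occurred -- its own baseline plus the comparison
# group's secular trend (the parallel-trends assumption).
counterfactual = pre_treat + (post_ctrl - pre_ctrl)

# Difference-in-differences estimate of the intervention effect.
effect = post_treat - counterfactual

print(f"counterfactual: {counterfactual:.2f}")  # 0.45
print(f"estimated effect: {effect:.2f}")        # 0.10
```

The estimate is only as credible as the parallel-trends assumption, which is exactly the kind of claim the paper's logic model is meant to make explicit and defend.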
