{"title":"To use or not to use Sobel's test for hypothesis testing of indirect effects and confidence interval estimation","authors":"Manasi M. Mittinty, Murthy N. Mittinty","doi":"10.1016/j.jclinepi.2024.111461","DOIUrl":"10.1016/j.jclinepi.2024.111461","url":null,"abstract":"","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"173 ","pages":"Article 111461"},"PeriodicalIF":7.3,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141604457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
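The record above concerns Sobel's test for indirect (mediated) effects. As context, a minimal sketch of the standard Sobel statistic (first-order delta method) is below; the coefficient values in the test are hypothetical and the function names are ours, not the authors'.

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel z-statistic for the indirect effect a*b in a simple
    X -> M -> Y mediation model, where a is the X->M coefficient and
    b is the M->Y coefficient (adjusted for X).

    Uses the first-order delta-method standard error, the approximation
    whose adequacy is the subject of the debate above.
    """
    se_ab = math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)
    return (a * b) / se_ab

def sobel_p(z):
    """Two-sided normal p-value for a Sobel z-statistic."""
    return math.erfc(abs(z) / math.sqrt(2))
```

A known weakness, and one motivation for questioning the test, is that the sampling distribution of a*b is skewed in small samples, so the normal approximation can be conservative; bootstrap confidence intervals are the usual alternative.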
{"title":"A survey of experts to identify methods to detect problematic studies: stage 1 of the INveStigating ProblEmatic Clinical Trials in Systematic Reviews project","authors":"Jack Wilkinson , Calvin Heal , George A. Antoniou , Ella Flemyng , Alison Avenell , Virginia Barbour , Esmee M. Bordewijk , Nicholas J.L. Brown , Mike Clarke , Jo Dumville , Steph Grohmann , Lyle C. Gurrin , Jill A. Hayden , Kylie E. Hunter , Emily Lam , Toby Lasserson , Tianjing Li , Sarah Lensen , Jianping Liu , Andreas Lundh , Jamie J. Kirkham","doi":"10.1016/j.jclinepi.2024.111512","DOIUrl":"10.1016/j.jclinepi.2024.111512","url":null,"abstract":"<div><h3>Background and Objective</h3><div>Randomized controlled trials (RCTs) inform health-care decisions. Unfortunately, some published RCTs contain false data, and some appear to have been entirely fabricated. Systematic reviews are performed to identify and synthesize all RCTs which have been conducted on a given topic. This means that any of these ‘problematic studies’ are likely to be included, but there are no agreed methods for identifying them. The INveStigating ProblEmatic Clinical Trials in Systematic Reviews (INSPECT-SR) project is developing a tool to identify problematic RCTs in systematic reviews of health care-related interventions. The tool will guide the user through a series of ‘checks’ to determine a study's authenticity. The first objective in the development process is to assemble a comprehensive list of checks to consider for inclusion.</div></div><div><h3>Methods</h3><div>We assembled an initial list of checks for assessing the authenticity of research studies, with no restriction to RCTs, and categorized these into five domains: Inspecting results in the paper; Inspecting the research team; Inspecting conduct, governance, and transparency; Inspecting text and publication details; Inspecting the individual participant data. 
We implemented this list as an online survey, and invited people with expertise and experience of assessing potentially problematic studies to participate through professional networks and online forums. Participants were invited to provide feedback on the checks on the list, and were asked to describe any additional checks they knew of, which were not featured in the list.</div></div><div><h3>Results</h3><div>Extensive feedback on an initial list of 102 checks was provided by 71 participants based in 16 countries across five continents. Fourteen new checks were proposed across the five domains, and suggestions were made to reword checks on the initial list. An updated list of checks was constructed, comprising 116 checks. Many participants expressed a lack of familiarity with statistical checks, and emphasized the importance of feasibility of the tool.</div></div><div><h3>Conclusion</h3><div>A comprehensive list of trustworthiness checks has been produced. The checks will be evaluated to determine which should be included in the INSPECT-SR tool.</div></div><div><h3>Plain Language Summary</h3><div>Systematic reviews draw upon evidence from randomized controlled trials (RCTs) to find out whether treatments are safe and effective. The conclusions from systematic reviews are often very influential, and inform both health-care policy and individual treatment decisions. However, it is now clear that the results of many published RCTs are not genuine. In some cases, the entire study may have been fabricated. It is not usual for the veracity of RCTs to be questioned during the process of compiling a systematic review. 
As a consequence, these “problematic studies” go unnoticed, and are allowed to contribute to","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"175 ","pages":"Article 111512"},"PeriodicalIF":7.3,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142121112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Surrogate endpoint metaregression: useful statistics for regulators and trialists","authors":"Stuart G. Baker , Marissa N.D. Lassere , Wang Pok Lo","doi":"10.1016/j.jclinepi.2024.111508","DOIUrl":"10.1016/j.jclinepi.2024.111508","url":null,"abstract":"<div><h3>Objectives</h3><div>The main purpose of using a surrogate endpoint is to estimate the treatment effect on the true endpoint sooner than with a true endpoint. Based on a metaregression of historical randomized trials with surrogate and true endpoints, we discuss statistics for applying and evaluating surrogate endpoints.</div></div><div><h3>Methods</h3><div>We computed statistics from 2 types of linear metaregressions for trial-level data: simple random effects and novel random effects with correlations among estimated treatment effects in trials with more than 2 arms. A key statistic is the estimated intercept of the metaregression line. An intercept that is small or not statistically significant increases confidence when extrapolating to a new treatment because of consistency with a single causal pathway and invariance to labeling of treatments as controls. For a regulator applying the metaregression to a new treatment, a useful statistic is the 95% prediction interval. 
For a clinical trialist planning a trial of a new treatment, useful statistics are the surrogate threshold effect proportion, the sample size multiplier adjusted for dropouts, and the novel true endpoint advantage.</div></div><div><h3>Results</h3><div>We illustrate these statistics with surrogate endpoint metaregressions involving antihypertension treatment, breast cancer screening, and colorectal cancer treatment.</div></div><div><h3>Conclusion</h3><div>Regulators and trialists should consider using these statistics when applying and evaluating surrogate endpoints.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"175 ","pages":"Article 111508"},"PeriodicalIF":7.3,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142121113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
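The key statistic described above is the intercept of a trial-level metaregression of true-endpoint treatment effects on surrogate-endpoint treatment effects. As a minimal sketch, a fixed-effect fit by inverse-variance weighted least squares is shown below; the paper's random-effects models with correlations among multi-arm estimates are richer, and the input values in the test are placeholders, not trial data.

```python
def wls_line(x, y, w):
    """Weighted least-squares fit of y = alpha + beta * x.

    x: surrogate-endpoint treatment effects, one per historical trial
    y: true-endpoint treatment effects
    w: inverse-variance weights
    Returns (alpha, beta). An alpha (intercept) near zero increases
    confidence when extrapolating the line to a new treatment.
    """
    sw = sum(w)
    sx = sum(wi * xi for wi, xi in zip(w, x))
    sy = sum(wi * yi for wi, yi in zip(w, y))
    sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    beta = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    alpha = (sy - beta * sx) / sw
    return alpha, beta
```

In the metaregression setting, the fitted line plus its 95% prediction interval (not computed here) is what a regulator would consult for a new treatment's surrogate result.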
{"title":"Stakeholder-informed positivity thresholds for disease markers and risk scores: a methodological framework and an application in obstructive lung disease","authors":"Mohsen Sadatsafavi , Amir Khakban , Tima Mohammadi , Samir Gupta , Nick Bansback , the IMPACT Study Team","doi":"10.1016/j.jclinepi.2024.111509","DOIUrl":"10.1016/j.jclinepi.2024.111509","url":null,"abstract":"<div><h3>Objectives</h3><div>A positivity threshold is often applied to markers or predicted risks to guide disease management. These thresholds are often decided exclusively by clinical experts despite being sensitive to the preferences of patients and the general public as ultimate stakeholders.</div></div><div><h3>Study Design and Setting</h3><div>We propose an analytical framework for quantifying the net benefit (NB) of an evidence-based positivity threshold based on combining preference-sensitive (eg, how individuals weight benefits and harms of treatment) and preference-agnostic (eg, the magnitude of benefit and the risk of harm) parameters. We propose parsimonious choice experiments to elicit preference-sensitive parameters from stakeholders, and targeted evidence synthesis to quantify the value of preference-agnostic parameters. We apply this framework to maintenance azithromycin therapy for chronic obstructive pulmonary disease using a discrete choice experiment to estimate preference weights for attribute levels associated with treatment. We identify the positivity threshold on 12-month moderate or severe exacerbation risk that would maximize the NB of treatment in terms of severe exacerbations avoided.</div></div><div><h3>Results</h3><div>In the case study, the prevention of moderate and severe exacerbations (benefits) and the risk of hearing loss and gastrointestinal symptoms (harms) emerged as important attributes. Four hundred seventy-seven respondents completed the discrete choice experiment survey. 
Relative to each percent risk of severe exacerbation, preference weights for each percent risk of moderate exacerbation, hearing loss, and gastrointestinal symptoms were 0.395 (95% confidence interval [CI] 0.338–0.456), 1.180 (95% CI 1.071–1.201), and 0.253 (95% CI 0.207–0.299), respectively. The optimal threshold that maximized NB was to treat patients with a 12-month risk of moderate or severe exacerbations ≥12%.</div></div><div><h3>Conclusion</h3><div>The proposed methodology can be applied to many contexts where the objective is to devise positivity thresholds that need to incorporate stakeholder preferences. Applying this framework to chronic obstructive pulmonary disease pharmacotherapy resulted in a stakeholder-informed treatment threshold that was substantially lower than the implicit thresholds in contemporary guidelines.</div></div><div><h3>Plain Language Summary</h3><div>Doctors often compare disease markers (such as laboratory results) or risk scores for a patient with cut-off values from guidelines to decide which patients need to be treated. For example, guidelines recommend that patients whose 10-year risk of heart attack is more than 10% be given statin pills. However, guidelines that recommend such treatment rules might not consider what matters most to patients (like how much they do not like side effects of the drugs). 
In this study, we propose a mathematical method where preferences of individuals on the trade-off betw","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"175 ","pages":"Article 111509"},"PeriodicalIF":7.3,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0895435624002658/pdfft?md5=338502f05fab7320074c3310f0f3dcac&pid=1-s2.0-S0895435624002658-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142114470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
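The threshold selection described above amounts to scanning candidate positivity thresholds and keeping the one that maximizes net benefit, with harms converted into benefit units via preference weights. A minimal sketch of that idea follows; the cohort risks, effect size, and harm weight below are invented for illustration and do not come from the study.

```python
def net_benefit(risks, threshold, risk_reduction, harm):
    """Per-patient net benefit (in severe-exacerbation equivalents) of
    treating everyone whose predicted 12-month exacerbation risk is at
    or above `threshold`.

    risks: predicted risks for a cohort
    risk_reduction: relative risk reduction attributed to treatment
    harm: preference-weighted harm per treated patient, expressed in
          the same units as the benefit
    """
    gain = sum(risk_reduction * r - harm for r in risks if r >= threshold)
    return gain / len(risks)

def best_threshold(risks, thresholds, risk_reduction, harm):
    """Grid search for the candidate threshold maximizing net benefit."""
    return max(thresholds,
               key=lambda t: net_benefit(risks, t, risk_reduction, harm))
```

In the actual framework the harm weight is elicited from stakeholders via a discrete choice experiment and the benefit parameters via evidence synthesis; this sketch only shows how the pieces combine.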
{"title":"Quantitative bias analysis methods for summary-level epidemiologic data in the peer-reviewed literature: a systematic review","authors":"Xiaoting Shi , Ziang Liu , Mingfeng Zhang , Wei Hua , Jie Li , Joo-Yeon Lee , Sai Dharmarajan , Kate Nyhan , Ashley Naimi , Timothy L. Lash , Molly M. Jeffery , Joseph S. Ross , Zeyan Liew , Joshua D. Wallach","doi":"10.1016/j.jclinepi.2024.111507","DOIUrl":"10.1016/j.jclinepi.2024.111507","url":null,"abstract":"<div><h3>Objectives</h3><p>Quantitative bias analysis (QBA) methods evaluate the impact of biases arising from systematic errors on observational study results. This systematic review aimed to summarize the range and characteristics of QBA methods for summary-level data published in the peer-reviewed literature.</p></div><div><h3>Study Design and Setting</h3><p>We searched MEDLINE, Embase, Scopus, and Web of Science for English-language articles describing QBA methods. For each QBA method, we recorded key characteristics, including applicable study designs, bias(es) addressed, bias parameters, and publicly available software. The study protocol was preregistered on the Open Science Framework (<span><span>https://osf.io/ue6vm/</span></span>).</p></div><div><h3>Results</h3><p>Our search identified 10,249 records, of which 53 were articles describing 57 QBA methods for summary-level data. Of the 57 QBA methods, 53 (93%) were explicitly designed for observational studies, and 4 (7%) for meta-analyses. There were 29 (51%) QBA methods that addressed unmeasured confounding, 19 (33%) misclassification bias, 6 (11%) selection bias, and 3 (5%) multiple biases. Thirty-eight (67%) QBA methods were designed to generate bias-adjusted effect estimates and 18 (32%) were designed to describe how bias could explain away observed findings. 
Twenty-two (39%) articles provided code or online tools to implement the QBA methods.</p></div><div><h3>Conclusion</h3><p>In this systematic review, we identified a total of 57 QBA methods for summary-level epidemiologic data published in the peer-reviewed literature. Future investigators can use this systematic review to identify different QBA methods for summary-level epidemiologic data.</p></div><div><h3>Plain Language Summary</h3><p>Quantitative bias analysis (QBA) methods can be used to evaluate the impact of biases on observational study results. However, little is known about the full range and characteristics of available methods in the peer-reviewed literature that can be used to conduct QBA using information reported in manuscripts and other publicly available sources without requiring the raw data from a study. In this systematic review, we identified 57 QBA methods for summary-level data from observational studies. Overall, there were 29 methods that addressed unmeasured confounding, 19 that addressed misclassification bias, six that addressed selection bias, and three that addressed multiple biases. This systematic review may help future investigators identify different QBA methods for summary-level data.</p></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"175 ","pages":"Article 111507"},"PeriodicalIF":7.3,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142094052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
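One widely used method in the class surveyed above, QBA for unmeasured confounding computable directly from a reported summary estimate, is the E-value of VanderWeele and Ding. A minimal sketch is below; the input value in the test is an example of ours, not data from the review.

```python
import math

def e_value(rr):
    """E-value for an observed risk ratio: the minimum strength of
    association, on the risk-ratio scale, that an unmeasured confounder
    would need to have with both exposure and outcome to fully explain
    away the observed association."""
    if rr < 1:
        rr = 1 / rr  # protective associations are inverted first
    return rr + math.sqrt(rr * (rr - 1))
```

Because it needs only the published point estimate, the E-value is a typical example of "summary-level" QBA: no raw study data are required.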
{"title":"Adherence to PRISMA-A and reporting was suboptimal in meta-analysis abstracts on drug efficacy for tumors: a literature survey","authors":"Baihui Yan , Min Li , Jiaxin Zhang , Hui Chang , Chi Ma , Fan Li","doi":"10.1016/j.jclinepi.2024.111506","DOIUrl":"10.1016/j.jclinepi.2024.111506","url":null,"abstract":"<div><h3>Objectives</h3><p>To assess the reporting of meta-analysis abstracts on drug efficacy for tumors in terms of adherence to Preferred Reporting Items for Systematic Reviews and Meta-analyses for Abstracts (PRISMA-A) and identify the potential factors associated with adherence to PRISMA-A.</p></div><div><h3>Study Design and Setting</h3><p>A total of 3,211 eligible meta-analysis abstracts were assessed using a checklist adapted from the PRISMA-A statement. Adherence to PRISMA-A was analyzed by the total PRISMA-A score and adherence rate (AR). The independent samples t-test was performed to compare the difference of the total scores between two groups with different characteristics, and the analysis of variance or Kruskal-Wallis test was used among multiple groups. The Pearson's correlation coefficient was used to measure the correlation between the word count and the total PRISMA-A score.</p></div><div><h3>Results</h3><p>The mean total score was 8.11 (±1.76) and the AR was 57.94%. The items with lower AR were funding (AR = 0.93%), registration (AR = 3.86%), and risk of bias (AR = 7.85%). Meta-analyses published after the release of PRISMA-A showed better adherence to PRISMA-A. Compared to unstructured abstracts, structured abstracts had a higher AR for each item in PRISMA-A. There was a positive correlation between the word count of abstract and the total PRISMA-A score (<em>r</em> = 0.358, <em>P</em> < .001).</p></div><div><h3>Conclusion</h3><p>Adherence to PRISMA-A was suboptimal in meta-analysis abstracts on drug efficacy for tumors, despite the improvement after the release of PRISMA-A. 
Various measures should be implemented to improve compliance with PRISMA-A and enhance the reporting of meta-analysis abstracts, including journal endorsement of PRISMA-A, stricter adherence requirements, and relaxation of abstract word limits.</p></div><div><h3>Plain Language Summary</h3><p>Meta-analysis is the statistical method used to compare and synthesize the results of studies on the same research problem. It is integral in guiding evidence-based decision making in clinical practice. However, crucial information is frequently inadequately documented in meta-analysis abstracts, thereby reducing their significance for readers. There has also been a lack of published research evaluating the reporting of meta-analysis abstracts in the field of drug efficacy for tumors. The objectives of our study were (1) to assess the reporting of meta-analysis abstracts on drug efficacy for tumors in terms of adherence to Preferred Reporting Items for Systematic Reviews and Meta-analyses for Abstracts (PRISMA-A); and (2) to identify factors that might influence adherence to PRISMA-A. Our study reveals that meta-analyses published after the release of PRISMA-A showed better adherence to PRISMA-A, although there is still large room for improvement. Compared to unstructured abstracts, structured abstracts received a higher adherence rate (AR) f","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"175 ","pages":"Article 111506"},"PeriodicalIF":7.3,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142047477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
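The adherence rate (AR) per item and the word-count correlation reported in the record above reduce to simple computations. A sketch follows; the 0/1 scores and word counts in the test are made up for illustration, not taken from the survey.

```python
import math

def adherence_rate(reported):
    """Proportion of abstracts reporting a given PRISMA-A item, where
    `reported` holds 1 (item reported) or 0 (not reported) per abstract."""
    return sum(reported) / len(reported)

def pearson_r(x, y):
    """Pearson correlation coefficient, e.g. between abstract word count
    and total PRISMA-A score."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```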
{"title":"Validation of the Persistent Somatic Symptom Stigma Scale for Healthcare Professionals","authors":"Brodie McGhie-Fraser , Aranka Ballering , Peter Lucassen , Caoimhe McLoughlin , Evelien Brouwers , Jon Stone , Tim olde Hartman , Sandra van Dulmen","doi":"10.1016/j.jclinepi.2024.111505","DOIUrl":"10.1016/j.jclinepi.2024.111505","url":null,"abstract":"<div><h3>Objectives</h3><p>Persistent somatic symptoms (PSS) describe recurrent or continuously occurring symptoms such as fatigue, dizziness, or pain that have persisted for at least several months. These include single symptoms such as chronic pain, combinations of symptoms, or functional disorders such as fibromyalgia or irritable bowel syndrome. While many studies have explored stigmatisation by healthcare professionals toward people with PSS, there is a lack of validated measurement instruments. We recently developed a stigma scale, the Persistent Somatic Symptom Stigma scale for Healthcare Professionals (PSSS-HCP). The aim of this study is to evaluate the measurement properties (validity and reliability) and factor structure of the PSSS-HCP.</p></div><div><h3>Study Design and Setting</h3><p>The PSSS-HCP was tested with 121 healthcare professionals across the United Kingdom to evaluate its measurement properties. Analysis of the factor structure was conducted using principal component analysis. We calculated Cronbach's alpha to determine the internal consistency of each (sub)scale. Test-retest reliability was conducted with a subsample of participants with a 2-week interval. 
We evaluated convergent validity by testing the association between the PSSS-HCP and the Medical Condition Regard Scale (MCRS) and the influence of social desirability using the short form of the Marlowe-Crowne Social Desirability Scale (MCSDS).</p></div><div><h3>Results</h3><p>The PSSS-HCP showed sufficient internal consistency (Cronbach's alpha = 0.84) and sufficient test-retest reliability, intraclass correlation = 0.97 (95% CI 0.94–0.99, <em>P</em> < .001). Convergent validity was sufficient between the PSSS-HCP and the MCRS, and no relationship was found between the PSSS-HCP and the MCSDS. A three factor structure was identified (othering, uneasiness in interaction, non-disclosure) which accounted for 60.5% of the variance using 13 of the 19 tested items.</p></div><div><h3>Conclusion</h3><p>The PSSS-HCP can be used to measure PSS stigmatisation by healthcare professionals. The PSSS-HCP has demonstrated sufficient internal consistency, test-retest reliability, convergent validity and no evidence of social desirability bias. The PSSS-HCP has demonstrated potential to measure important aspects of stigma and provide a foundation for stigma reduction intervention evaluation.</p></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"174 ","pages":"Article 111505"},"PeriodicalIF":7.3,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0895435624002610/pdfft?md5=9b4cfcd16d876ea9ae68701e88d71ef3&pid=1-s2.0-S0895435624002610-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142005793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
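The internal-consistency statistic reported above (Cronbach's alpha = 0.84) has a simple closed form over item-level scores. A minimal sketch is below; the toy score matrix in the test is ours, not the study's data.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    items: one list of scores per item, each indexed by respondent.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))
```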
{"title":"Target trial emulation using new comorbidity indices provided risk estimates comparable to a randomized trial","authors":"Marcus Westerberg , Hans Garmo , David Robinson , Pär Stattin , Rolf Gedeborg","doi":"10.1016/j.jclinepi.2024.111504","DOIUrl":"10.1016/j.jclinepi.2024.111504","url":null,"abstract":"<div><h3>Objectives</h3><p>To quantify the ability of two new comorbidity indices to adjust for confounding, by benchmarking a target trial emulation against the randomized controlled trial (RCT) result.</p></div><div><h3>Study Design and Setting</h3><p>Observational study including 18,316 men from Prostate Cancer data Base Sweden 5.0, diagnosed with prostate cancer between 2008 and 2019 and treated with primary radical prostatectomy (RP, <em>n</em> = 14,379) or radiotherapy (RT, <em>n</em> = 3,937). The adjusted risk of death from any cause after adjustment for comorbidity using two new comorbidity indices, the multidimensional diagnosis-based comorbidity index and the drug comorbidity index, was compared with adjustment for the Charlson comorbidity index (CCI).</p></div><div><h3>Results</h3><p>Risk of death was higher after RT than RP (hazard ratio [HR] = 1.94; 95% confidence interval [CI]: 1.70–2.21). The difference decreased when adjusting for age, cancer characteristics, and CCI (HR = 1.32, 95% CI: 1.06–1.66). Adjustment for the two new comorbidity indices further attenuated the difference (HR 1.14, 95% CI 0.91–1.44). Emulation of a hypothetical pragmatic trial that also included older men with any type of baseline comorbidity largely confirmed these results (HR 1.10; 95% CI 0.95–1.26).</p></div><div><h3>Conclusion</h3><p>Adjustment for comorbidity using the two new indices provided risk estimates of death from any cause in line with the results of an RCT. 
Similar results were seen in a broader study population, more representative of clinical practice.</p></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"174 ","pages":"Article 111504"},"PeriodicalIF":7.3,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0895435624002609/pdfft?md5=66bdbf3013b9d88ca0da75aec3fcdb62&pid=1-s2.0-S0895435624002609-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142005792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supplementing systematic review findings with healthcare system data: pilot projects from the Agency for Healthcare Research and Quality Evidence-based Practice Center program","authors":"Haley K. Holmer , Suchitra Iyer , Celia V. Fiordalisi , Edi Kuhn , Mary L. Forte , M. Hassan Murad , Zhen Wang , Amy Y. Tsou , Jeremy J. Michel , Craig A. Umscheid","doi":"10.1016/j.jclinepi.2024.111484","DOIUrl":"10.1016/j.jclinepi.2024.111484","url":null,"abstract":"<div><h3>Objectives</h3><p>The US Agency for Healthcare Research and Quality, through the Evidence-based Practice Center (EPC) Program, aims to provide health system decision makers with the highest-quality evidence to inform clinical decisions. However, limitations in the literature may lead to inconclusive findings in EPC systematic reviews (SRs). The EPC Program conducted pilot projects to understand the feasibility, benefits, and challenges of utilizing health system data to augment SR findings to support confidence in healthcare decision-making based on real-world experiences.</p></div><div><h3>Study Design and Setting</h3><p>Three contractors (each an EPC located at a different health system) selected a recently completed SR conducted by their center and identified an evidence gap that electronic health record (EHR) data might address. All pilot project topics addressed clinical questions as opposed to care delivery, care organization, or care disparities topics that are common in EPC reports. Topic areas addressed by each EPC included infantile epilepsy, migraine, and hip fracture. EPCs also tracked additional resources needed to conduct supplemental analyses. The workgroup met monthly in 2022-2023 to discuss challenges and lessons learned from the pilot projects.</p></div><div><h3>Results</h3><p>Two supplemental data analyses filled an evidence gap identified in the SRs (raised certainty of evidence, improved applicability) and the third filled a health system knowledge gap. 
Project challenges fell under three themes: regulatory and logistical issues, data collection and analysis, and interpretation and presentation of findings. Limited ability to capture key clinical variables given inconsistent or missing data within the EHR was a major limitation. The workgroup found that conducting supplemental data analysis alongside an SR was feasible but adds considerable time and resources to the review process (estimated total hours to complete pilot projects ranged from 283 to 595 across EPCs), and that the increased effort and resources added limited incremental value.</p></div><div><h3>Conclusion</h3><p>Supplementing existing SRs with analyses of EHR data is resource intensive and requires specialized skillsets throughout the process. While using EHR data for research has immense potential to generate real-world evidence and fill knowledge gaps, these data may not yet be ready for routine use alongside SRs.</p></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"174 ","pages":"Article 111484"},"PeriodicalIF":7.3,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141890819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GRADE guidance 39: using GRADE-ADOLOPMENT to adopt, adapt or create contextualized recommendations from source guidelines and evidence syntheses","authors":"Miloslav Klugar , Tamara Lotfi , Andrea J. Darzi , Marge Reinap , Jitka Klugarová , Lucia Kantorová , Jun Xia , Romina Brignardello-Petersen , Andrea Pokorná , Glen Hazlewood , Zachary Munn , Rebecca L. Morgan , Ingrid Toews , Ignacio Neumann , Patraporn Bhatarasakoon , Airton Tetelbom Stein , Michael McCaul , Alexander G. Mathioudakis , Kristen E. D'Anci , Grigorios I. Leontiadis , Holger J. Schünemann","doi":"10.1016/j.jclinepi.2024.111494","DOIUrl":"10.1016/j.jclinepi.2024.111494","url":null,"abstract":"<div><h3>Background and Objective</h3><p>The Grading of Recommendations, Assessment, Development and Evaluations (GRADE)-ADOLOPMENT methodology has been widely used to adopt, adapt, or de novo develop recommendations from existing or new guideline and evidence synthesis efforts. The objective of this guidance is to refine the operationalization for applying GRADE-ADOLOPMENT.</p></div><div><h3>Methods</h3><p>Through iterative discussions, online meetings, and email communications, the GRADE-ADOLOPMENT project group drafted the updated guidance. We then conducted a review of handbooks of guideline-producing organizations, and a scoping review of published and planned adolopment guideline projects. The lead authors refined the existing approach based on the scoping review findings and feedback from members of the GRADE working group. 
We presented the revised approach to the group in November 2022 (approximately 115 people), in May 2023 (approximately 100 people), and twice in September 2023 (approximately 60 and 90 people) for approval.</p></div><div><h3>Results</h3><p>This GRADE guidance shows how to effectively and efficiently contextualize recommendations using the GRADE-ADOLOPMENT approach by doing the following: (1) showcasing alternative pathways for starting an adolopment effort; (2) elaborating on the different essential steps of this approach, such as building on existing evidence-to-decision frameworks (EtDs) when available, or developing new EtDs if necessary; and (3) providing examples from adolopment case studies to facilitate the application of the approach. We demonstrate how to use contextual evidence to make judgments about EtD criteria, and highlight the importance of making the resulting EtDs available to facilitate adolopment efforts by others.</p></div><div><h3>Conclusion</h3><p>This updated GRADE guidance further operationalizes the application of GRADE-ADOLOPMENT based on over 6 years of experience. It serves to support uptake and application by end users interested in contextualizing recommendations to a local setting or specific reality in a short period of time or with limited resources.</p></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"174 ","pages":"Article 111494"},"PeriodicalIF":7.3,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141908300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}