{"title":"The influence of intervention fidelity on treatment effect estimates in clinical trials of complex interventions: A meta-epidemiological study.","authors":"Arsenio Páez, David Nunan, Peter McCulloch, David Beard","doi":"10.1016/j.jclinepi.2024.111610","DOIUrl":"https://doi.org/10.1016/j.jclinepi.2024.111610","url":null,"abstract":"<p><strong>Background: </strong>Randomized clinical trials (RCTs) provide the most reliable estimates of treatment effectiveness for therapeutic interventions. However, flaws in their design and conduct may bias treatment effect estimates, leading to over- or underestimation of the true intervention effect. This is especially relevant for complex interventions, such as those in rehabilitation, which are multifaceted and tailored for individual patients or providers, leading to variations in delivery and treatment effects.</p><p><strong>Objective: </strong>To assess whether poor intervention fidelity, the faithfulness of the intervention delivered in an RCT to what was intended in the trial protocol, influences (biases) estimates of treatment effects derived from meta-analysis of rehabilitation RCTs.</p><p><strong>Methods: </strong>In this meta-epidemiological study of 19 meta-analyses and 204 RCTs published between 2010 and 2020, we evaluated the difference in intervention effects between RCTs in which intervention fidelity was monitored and those in which it was absent. We also conducted random-effects meta-regression to measure associations between intervention fidelity, risk of bias (ROB), study sample size, and treatment effect estimates.</p><p><strong>Results: </strong>There was a linear relationship between fidelity and treatment effect sizes across RCTs, even after adjusting for ROB and study sample size. Higher degrees of fidelity were associated with smaller but more precise treatment effect estimates (d = -0.23; 95% CI: -0.38, -0.74). Lower or absent fidelity was associated with larger, less precise estimates.
Adjusting for fidelity reduced pooled treatment effect estimates in 4 meta-analyses from moderate to small, or from small to negligible or no effect, highlighting how poor fidelity can bias meta-analyses' results.</p><p><strong>Conclusion: </strong>Poor or absent intervention fidelity in RCTs may lead to overestimation of observed treatment effects, skewing the conclusions drawn from individual studies and from systematic reviews with meta-analyses when results are pooled. Caution is needed when interpreting the results of complex intervention RCTs when fidelity is not monitored or is monitored but not reported.</p>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":" ","pages":"111610"},"PeriodicalIF":7.3,"publicationDate":"2024-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142631895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
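The random-effects meta-regression described in this abstract can be sketched as inverse-variance weighted least squares of per-trial effect sizes on a fidelity covariate. Everything below is illustrative: the trial data, the 0/1 fidelity coding, and the between-trial variance `tau2` are all invented assumptions, not the authors' data or exact estimation method.

```python
import numpy as np

# Hypothetical per-trial data: standardised effect size d, its variance v,
# and whether intervention fidelity was monitored (1) or not (0).
d = np.array([0.55, 0.48, 0.30, 0.22, 0.61, 0.18, 0.25, 0.40])
v = np.array([0.04, 0.05, 0.02, 0.03, 0.06, 0.02, 0.03, 0.04])
fidelity = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0])

tau2 = 0.01           # assumed between-trial (random-effects) variance
w = 1.0 / (v + tau2)  # inverse-variance weights

# Weighted least squares: d_i = b0 + b1 * fidelity_i + error_i
X = np.column_stack([np.ones_like(fidelity), fidelity])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ d)
se = np.sqrt(np.diag(np.linalg.inv(X.T @ W @ X)))
print(f"fidelity slope: {beta[1]:.3f} (SE {se[1]:.3f})")
```

A negative slope on the fidelity covariate corresponds to the abstract's finding that fidelity-monitored trials yield smaller effect estimates; in practice `tau2` would be estimated (e.g. by restricted maximum likelihood) rather than fixed.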
{"title":"Prediction models for outcomes in people with low back pain receiving conservative treatment: a systematic review.","authors":"Rubens Vidal, Margreth Grotle, Marianne Bakke Johnsen, Louis Yvernay, Jan Hartvigsen, Raymond Ostelo, Lise Grethe Kjønø, Christian Lindtveit Enstad, Rikke Munk Killingmo, Einar Henjum Halsnes, Guilherme H D Grande, Crystian B Oliveira","doi":"10.1016/j.jclinepi.2024.111593","DOIUrl":"https://doi.org/10.1016/j.jclinepi.2024.111593","url":null,"abstract":"<p><strong>Objective: </strong>To identify, critically appraise and evaluate the performance measures of the available prediction models for outcomes in people with low back pain (LBP) receiving conservative treatment.</p><p><strong>Study design and settings: </strong>In this systematic review, literature searches were conducted in Embase, Medline, and CINAHL from their inception until February 2024. Studies containing follow-up assessment (e.g., prospective cohort studies, registry-based studies) investigating prediction models of outcomes (e.g., pain intensity and disability) for people with LBP receiving conservative treatment were included. Two independent reviewers performed the study selection, the data extraction using the CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies (CHARMS), and the risk of bias assessment using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). Findings of individual studies were reported narratively, taking into account the discrimination and calibration measures of the prediction models.</p><p><strong>Results: </strong>Seventy-five studies developing or investigating the validity of 216 models were included in this review. Most prediction models investigated people receiving physiotherapy treatment, and most models included socio-demographic variables, clinical features, and self-reported measures as predictors.
The discriminatory capacity of the 27 prediction models for pain intensity varied greatly on internal validation, with c-statistics ranging from 0.48 to 0.94. The 31 models for disability showed the same pattern, with c-statistics ranging from 0.48 to 0.86. The calibration measures on internal validation of the models predicting pain intensity and disability appeared adequate. Only one of the three studies testing the external validity of models predicting pain intensity and disability reported both discrimination and calibration measures, which proved inadequate. The prediction models for the secondary outcomes (e.g., self-reported recovery, quality of life, return to work) showed varied performance measures on internal validation, and only two studies tested the external validity of models, although they did not report the performance measures.</p><p><strong>Conclusion: </strong>Several prediction models have been developed for people with LBP receiving conservative treatment; however, most show inadequate discriminatory validity. Few studies have externally validated the prediction models, and future studies should test this before implementation in clinical practice.</p>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":" ","pages":"111593"},"PeriodicalIF":7.3,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142631695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
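The c-statistic reported throughout this abstract is the probability that a randomly chosen patient who experiences the outcome receives a higher predicted risk than one who does not (0.5 is chance, 1.0 is perfect discrimination); it equals the area under the ROC curve. A minimal sketch, with hypothetical predictions and outcomes:

```python
import numpy as np

def c_statistic(risk, outcome):
    """Concordance (c-statistic): the fraction of event/non-event pairs
    in which the event patient has the higher predicted risk; ties
    count as 0.5.  Equivalent to the ROC AUC."""
    risk = np.asarray(risk, dtype=float)
    outcome = np.asarray(outcome, dtype=bool)
    pos, neg = risk[outcome], risk[~outcome]
    diff = pos[:, None] - neg[None, :]     # all event vs non-event pairs
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

# Hypothetical predicted risks of a poor outcome and observed outcomes
risk    = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
outcome = [1,   1,   0,   1,   0,   0,   1,   0]
print(f"c = {c_statistic(risk, outcome):.2f}")  # prints: c = 0.75
```

This pairwise definition makes clear why a c-statistic of 0.48, as some of the reviewed models showed, means the model discriminates no better than chance.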
{"title":"The Banksia plot: a method for visually comparing point estimates and confidence intervals across datasets.","authors":"Simon L Turner, Amalia Karahalios, Elizabeth Korevaar, Joanne E McKenzie","doi":"10.1016/j.jclinepi.2024.111591","DOIUrl":"https://doi.org/10.1016/j.jclinepi.2024.111591","url":null,"abstract":"<p><strong>Objective: </strong>In research evaluating statistical analysis methods, a common aim is to compare point estimates and confidence intervals (CIs) calculated from different analyses. This can be challenging when the outcomes (and their scale ranges) differ across datasets. We therefore developed a graphical method, the 'Banksia plot', to facilitate pairwise comparisons of different statistical analysis methods by plotting and comparing point estimates and confidence intervals from each analysis method both within and across datasets.</p><p><strong>Study design and setting: </strong>The plot is constructed in three stages. Stage 1: To compare results of two statistical analysis methods, for each dataset, the point estimate from the reference analysis method is centred on zero, and its confidence limits are scaled to range from -0.5 to 0.5. The same centring and scale adjustment values are then applied to the corresponding comparator analysis point estimate and confidence limits. Stage 2: A Banksia plot is constructed by plotting the centred and scaled point estimates from the comparator method for each dataset on a rectangle centred at zero, ranging from -0.5 to 0.5, which represents the reference method results. Stage 3: Optionally, a matrix of Banksia plots is graphed, showing all pairwise comparisons from multiple analysis methods. 
We illustrate the Banksia plot using two examples.</p><p><strong>Results: </strong>Illustration of the Banksia plot demonstrates how the plot makes it immediately apparent whether there are differences in point estimates and confidence intervals when using different analysis methods (example 1), or different data extractors (example 2). Furthermore, we demonstrate how different bases for ordering the confidence intervals can be used to highlight particular differences (i.e., in point estimates or confidence interval widths).</p><p><strong>Conclusion: </strong>The Banksia plot provides a visual summary of pairwise comparisons of different analysis methods, allowing patterns and trends in the point estimates and confidence intervals to be easily identified.</p>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":" ","pages":"111591"},"PeriodicalIF":7.3,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142631710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
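Stage 1 of the plot's construction amounts to one shift-and-scale transformation per dataset: the reference estimate maps to 0, its confidence limits to -0.5 and 0.5, and the comparator result gets the same transformation. A minimal sketch, assuming symmetric confidence intervals and using hypothetical numbers:

```python
def banksia_transform(ref, comp):
    """Stage 1 of the Banksia plot: centre the reference point estimate
    at zero, scale its confidence interval to span (-0.5, 0.5), and
    apply the same shift and scale to the comparator result.
    Assumes the reference CI is symmetric about its point estimate.
    ref, comp: (estimate, lower, upper) tuples for one dataset."""
    ref_est, ref_lo, ref_hi = ref
    width = ref_hi - ref_lo
    return tuple((x - ref_est) / width for x in comp)

# Hypothetical results for one dataset (mean difference, 95% CI)
reference  = (1.20, 0.80, 1.60)   # reference analysis method
comparator = (1.40, 0.90, 1.90)   # comparator analysis method

est, lo, hi = banksia_transform(reference, comparator)
print(f"comparator on the reference scale: {est:.2f} ({lo:.2f}, {hi:.2f})")
```

Because every dataset is rescaled to the same (-0.5, 0.5) rectangle, comparator results from outcomes with very different scales can be overlaid on one plot (Stage 2) and repeated for every method pair (Stage 3).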
{"title":"The EORTC QLU-C10D distinguished better between cancer patients and the general population than PROPr and EQ-5D-5L in a cross-sectional study.","authors":"Annika Dohmen, Alexander Obbarius, Milan Kock, Sandra Nolte, Christopher J Sidey-Gibbons, Jose M Valderas, Jens Rohde, Kathrin Rieger, Felix Fischer, Ulrich Keilholz, Matthias Rose, Christoph Paul Klapproth","doi":"10.1016/j.jclinepi.2024.111592","DOIUrl":"https://doi.org/10.1016/j.jclinepi.2024.111592","url":null,"abstract":"<p><strong>Objective: </strong>Health state utility (HSU) instruments for calculating quality-adjusted life years, such as the EORTC QLU-C10D, the PROMIS Preference Score (PROPr) and the EQ-5D-5L, yield different HSU values due to different modelling and different underlying descriptive scales. For example, the QLU-C10D includes cancer-relevant dimensions such as nausea. This study aimed to investigate how these differences in descriptive scales contribute to differences in HSU scores by comparing scores of cancer patients receiving chemotherapy to those of the general population.</p><p><strong>Study design and setting: </strong>EORTC QLU-C10D, PROPr, and EQ-5D-5L scores were obtained for a convenience sample of 484 outpatients of the Department of Oncology, Charité - Universitätsmedizin Berlin, Germany. Convergent and known-groups validity were assessed using Pearson's correlation and intraclass correlation coefficients. We assessed each descriptive dimension score's discriminatory power, comparing patients' scores with those of the general population (n>1,000), using effect size (ES; Cohen's d) and area under the curve (AUC).</p><p><strong>Results: </strong>Mean scores of QLU-C10D (0.64; 95% CI 0.62-0.67), PROPr (0.38; 95% CI 0.36-0.40), and EQ-5D-5L (0.72; 95% CI 0.70-0.75) differed significantly, irrespective of sociodemographic factors, condition, or treatment.
Conceptually similar descriptive scores obtained from the HSU instruments showed varying degrees of discrimination in terms of ES and AUC between patients and the general population. The QLU-C10D and its dimensions showed the largest ES and AUC.</p><p><strong>Conclusion: </strong>The QLU-C10D and its domains distinguished best between the health states of the two populations, compared to the PROPr and EQ-5D-5L. As the EORTC QLQ-C30 is widely used in clinical practice, its data are available for economic evaluation.</p>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":" ","pages":"111592"},"PeriodicalIF":7.3,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142631711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
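The effect size used in this study, Cohen's d, is the between-group mean difference divided by the pooled standard deviation. A sketch with hypothetical utility scores (the numbers below are invented, not the study's data):

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d: standardised mean difference between two independent
    groups, using the pooled (bias-corrected) standard deviation."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    pooled_var = (((len(a) - 1) * a.var(ddof=1)
                   + (len(b) - 1) * b.var(ddof=1))
                  / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical utility scores: cancer patients vs. the general population
patients = [0.55, 0.60, 0.70, 0.62, 0.58]
general  = [0.80, 0.85, 0.75, 0.90, 0.78]
print(f"Cohen's d = {cohens_d(patients, general):.2f}")
```

By convention |d| around 0.2, 0.5, and 0.8 are read as small, medium, and large effects, which is the scale on which the QLU-C10D dimensions showed the largest values here.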
{"title":"Statistical noise in PD-(L)1 inhibitor trials: Unravelling the durable-responder effect.","authors":"Michael Coory, Susan J Jordan","doi":"10.1016/j.jclinepi.2024.111589","DOIUrl":"https://doi.org/10.1016/j.jclinepi.2024.111589","url":null,"abstract":"<p><strong>Background: </strong>Programmed-death-1/ligand-1 inhibitors (PD-1/L1i's) have emerged as pivotal treatments for many cancers. A notable feature of this class of medicines is the dichotomous response pattern: A small (but clinically relevant) percentage of patients (5%-20%) benefit from deep and durable responses resembling functional cures (durable responders), while most patients experience only a modest or negligible response. Accurate prediction of durable responders remains elusive due to the lack of a reliable biomarker. Another notable feature of these medicines is that different PD-1/L1i's have obtained statistically significant results, leading to marketing approval, for some cancer indications, but not for others, with no discernible pattern. These puzzling inconsistencies have generated extensive discussions among oncologists. Proposed (but not entirely convincing) explanations include true underlying differences in efficacy for some types of cancer but not others, or subtle differences in trial design.</p><p><strong>Objective: </strong>To investigate a less-explored hypothesis, the durable-responder effect: An initially unidentified group of durable responders generates more statistical noise than anticipated, leading to low-powered randomised controlled trials (RCTs) that report randomly variable results.</p><p><strong>Study design: </strong>Employing simulation, this investigation divides participants in PD-(L)1i RCTs into two groups: durable responders and patients with a more modest response.
Drawing on published data for melanoma, lung and urothelial cancers, multiple pre-specified scenarios are replicated 50,000 times, systematically varying the durable-responder percentage from 5% to 20% and the modest-response hazard ratio for overall survival [HR(OS)] from 0.8 to 1.0. This allowed evaluation of the effect of durable responders on power, point estimates of the treatment effect for OS, and the probability of a misleading signal for harm.</p><p><strong>Results: </strong>When the treatment effect for the modest responders is similar to that in the comparator arm, statistical power remains below 80%, limiting the ability to reliably detect durable responders. Conversely, there is a material probability of obtaining a statistically significant result that exaggerates the treatment effect by chance. For instance, with an average HR(OS) of 0.93 (corresponding to 5% durable responders), statistically significant trials (7.2%) show an average HR(OS) of 0.77. Additionally, when 5% are durable responders, there is a 20% probability that the HR(OS) will exceed 1.0, suggesting potential harm when none exists.</p><p><strong>Conclusion: </strong>This paper adds to the possible explanations for the puzzlingly inconsistent results from PD-(L)1i RCTs.
Initially unidentified durable responders introduce features typical of imprecise, low-powered studies: a propensity for false-negative results; estimates of benefit that might not replicate; an","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":" ","pages":"111589"},"PeriodicalIF":7.3,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142592089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
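The mixture design described in this abstract can be sketched along the following lines. All parameters (hazards, arm size, follow-up) are illustrative assumptions, and the hazard ratio is estimated from a simple exponential events-per-person-time model rather than the authors' exact approach; the modest-response hazard ratio is set to 1.0, the "no benefit except durable responders" scenario.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(n=300, p_durable=0.05, hr_modest=1.0, base_hazard=0.2,
                   durable_hazard=0.005, follow_up=10.0):
    """One two-arm trial with exponential survival and administrative
    censoring at `follow_up`.  In the treatment arm a fraction
    `p_durable` are durable responders with a near-zero hazard; the rest
    have hazard hr_modest * base_hazard (hr_modest = 1.0 means no
    benefit for modest responders).  Returns the estimated log hazard
    ratio and its standard error from an events/person-time model."""
    t_ctrl = rng.exponential(1 / base_hazard, n)
    durable = rng.random(n) < p_durable
    hazard_trt = np.where(durable, durable_hazard, hr_modest * base_hazard)
    t_trt = rng.exponential(1 / hazard_trt)

    def events_and_person_time(t):
        return np.sum(t <= follow_up), np.sum(np.minimum(t, follow_up))

    d_trt, pt_trt = events_and_person_time(t_trt)
    d_ctrl, pt_ctrl = events_and_person_time(t_ctrl)
    log_hr = np.log((d_trt / pt_trt) / (d_ctrl / pt_ctrl))
    return log_hr, np.sqrt(1 / d_trt + 1 / d_ctrl)

# Replicate the scenario and summarise (far fewer replications than the
# paper's 50,000, to keep the sketch fast)
reps = [simulate_trial() for _ in range(2000)]
power = np.mean([lh / se < -1.96 for lh, se in reps])
sig = [lh for lh, se in reps if lh / se < -1.96]
p_harm_signal = np.mean([lh > 0 for lh, se in reps])
print(f"power: {power:.1%}, mean HR among significant trials: "
      f"{np.exp(np.mean(sig)):.2f}, P(estimated HR > 1): {p_harm_signal:.1%}")
```

Even this toy version reproduces the qualitative pattern in the abstract: power stays low, the trials that do reach significance exaggerate the average benefit, and a non-trivial fraction of trials show an estimated HR above 1.0 despite the durable responders' real benefit.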
{"title":"Use of participant data and biological samples is insufficiently described in participant information leaflets.","authors":"Emer R McGrath, Nigel Kirby, Frances Shiely","doi":"10.1016/j.jclinepi.2024.111590","DOIUrl":"https://doi.org/10.1016/j.jclinepi.2024.111590","url":null,"abstract":"<p><strong>Background and objectives: </strong>With greater availability of participant data and biobank repositories following clinical trial completion, adequately describing future data and biological sample re-use plans to trial participants is increasingly important. We evaluated how trial teams currently describe current and future use of participant data and biological samples in participant information leaflets (PILs).</p><p><strong>Methods: </strong>Retrospective qualitative analysis of 240 PILs (182 clinical trials) in Ireland and the UK. Descriptions of data and sample use/re-use were extracted and analysed using a four-stage pragmatic content analysis approach. A recommended list of questions to be addressed by trial teams when designing PILs was developed.</p><p><strong>Results: </strong>Of the 240 included PILs, 85% specifically mentioned, or directly implied, how confidentiality of participant data would be maintained; 38% were considered by the authors to adequately describe how data confidentiality would be maintained (i.e. the PIL specifically mentioned data deidentification and compliance with data protection regulations); 47% reported the intended duration of data storage (mean 15, SD 9 years); 40% specified whether data would be used in future research studies and 28% stated whether data would be shared with other researchers. Of the 117 PILs stating biological samples would be collected from participants, 80% provided a reason for requesting the sample, 66% stated whether stored samples would be deidentified, 21% specified whether individual-level results would be made available to participants and 70% specified whether samples may be used for future studies.
Of the 73 PILs specifying planned future sample storage, 18% stated the intended duration of storage and 48% specified whether samples would be shared with other researchers. A list of eight recommended questions to be addressed by trial teams when designing PILs was identified, e.g. 'What is the intended duration of data and sample storage for the current study?'.</p><p><strong>Conclusion: </strong>PILs often provide insufficient detail regarding plans for current use and future re-use of participants' data and their biological samples. The majority do not adequately describe plans for maintaining data confidentiality. Best practice approaches to describing data use and re-use in PILs are needed. This will require multi-stakeholder input, including from potential trial participants, to progress this work.</p>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":" ","pages":"111590"},"PeriodicalIF":7.3,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142592093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Answers to comments by Jonas Schmidt, Casper Urth Pedersen, and Sisse Helle Njor.","authors":"Philippe Autier, Karsten Juhl Jørgensen, Henrik Støvring","doi":"10.1016/j.jclinepi.2024.111588","DOIUrl":"10.1016/j.jclinepi.2024.111588","url":null,"abstract":"","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":" ","pages":"111588"},"PeriodicalIF":7.3,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142565172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The methods adopted by Autier et al do not support their conclusions.","authors":"Jonas Schmidt, Casper Urth Pedersen, Sisse Helle Njor","doi":"10.1016/j.jclinepi.2024.111587","DOIUrl":"10.1016/j.jclinepi.2024.111587","url":null,"abstract":"","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":" ","pages":"111587"},"PeriodicalIF":7.3,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142565174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Centering racial health equity in systematic reviews-paper 1: introduction to the series.","authors":"Meera Viswanathan, Nila A Sathe, Vivian Welch, Damian K Francis, Patricia C Heyn, Rania Ali, Tiffany Duque, Elizabeth A Terhune, Jennifer S Lin, Ana Beatriz Pizarro, Dru Riddle","doi":"10.1016/j.jclinepi.2024.111577","DOIUrl":"10.1016/j.jclinepi.2024.111577","url":null,"abstract":"<p><strong>Objectives: </strong>Systematic reviews hold immense promise as tools to highlight evidence-based practices that can reduce or aim to eliminate racial health disparities. Currently, consensus on centering racial health equity in systematic reviews and other evidence synthesis products is lacking. Centering racial health equity implies concentrating or focusing attention on health equity in ways that bring attention to the perspectives or needs of groups that are typically marginalized.</p><p><strong>Study design and setting: </strong>This Cochrane US Network team and colleagues, with the guidance of a steering committee, sought to understand the views of varied interest holders through semistructured interviews and conducted evidence syntheses addressing (1) definitions of racial health equity, (2) logic models and frameworks to centering racial health equity, (3) interventions to reduce racial health inequities, and (4) interest holder engagement in evidence syntheses. Our methods and teams include a primarily American and Canadian lens; however, findings and insights derived from this work are applicable to any region in which racial or ethnic discrimination and disparities in care due to structural causes exist.</p><p><strong>Results: </strong>In this series, we explain why centering racial health equity matters and what gaps exist and may need to be prioritized. The interviews and systematic reviews identified numerous gaps to address racial health equity that require changes not merely to evidence synthesis practices but also to the underlying evidence ecosystem. 
These changes include increasing representation, establishing foundational guidance (on definitions and causal mechanisms and models), building a substantive evidence base on racial health equity, strengthening methods guidance, disseminating and implementing results, and sustaining new practices.</p><p><strong>Conclusion: </strong>Centering racial health equity requires consensus on the part of key interest holders. As part of the next steps in building consensus, the manifold gaps identified by this series of papers need to be prioritized. Given the resource constraints, changes in norms around systematic reviews are most likely to occur when evidence-based standards for success are clearly established and the benefits of centering racial health equity are apparent.</p><p><strong>Plain language summary: </strong>Racial categories are not based on biology, but racism has negative biological effects. People from racial or ethnic minority groups have often been left out of research and ignored in systematic reviews. Systematic reviews often help clinicians and policymakers with evidence-based decisions. Centering racial health equity in systematic reviews will help clinicians and policymakers to improve outcomes for people from racial or ethnic minority groups. We conducted interviews and a series of four systematic reviews on definitions, logic models and frameworks, meth","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":" ","pages":"111577"},"PeriodicalIF":7.3,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142548827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Networks of interventions with no closed loops are conceptually limited as a source of evidence.","authors":"Rafael Leite Pacheco, Rachel Riera","doi":"10.1016/j.jclinepi.2024.111584","DOIUrl":"10.1016/j.jclinepi.2024.111584","url":null,"abstract":"","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":" ","pages":"111584"},"PeriodicalIF":7.3,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142559369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}