Individual participant data meta-analysis to examine linear or non-linear treatment-covariate interactions at multiple time-points for a continuous outcome
Miriam Hattle, Joie Ensor, Katie Scandrett, Marienke van Middelkoop, Danielle A. van der Windt, Melanie A. Holden, Richard D. Riley
Research Synthesis Methods 15(6): 1001-1016 | DOI: 10.1002/jrsm.1750 | Published 2024-09-16 | Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1750

Abstract: Individual participant data (IPD) meta-analysis projects obtain, harmonise, and synthesise original data from multiple studies. Many IPD meta-analyses of randomised trials are initiated to identify treatment effect modifiers at the individual level, thus requiring statistical modelling of interactions between treatment effect and participant-level covariates. Using a two-stage approach, the interaction is estimated in each trial separately and combined in a meta-analysis. In practice, two complications often arise with continuous outcomes: examining non-linear relationships for continuous covariates and dealing with multiple time-points. We propose a two-stage multivariate IPD meta-analysis approach that summarises non-linear treatment-covariate interaction functions at multiple time-points for continuous outcomes. A set-up phase is required to identify a small set of time-points; relevant knot positions for a spline function, at identical locations in each trial; and a common reference group for each covariate. Crucially, the multivariate approach can include participants or trials with missing outcomes at some time-points. In the first stage, restricted cubic spline functions are fitted and their interaction with treatment at each discrete time-point is estimated in each trial separately. In the second stage, the parameter estimates defining these multiple interaction functions are jointly synthesised in a multivariate random-effects meta-analysis model accounting for within-trial and across-trial correlation. These meta-analysis estimates define the summary non-linear interactions at each time-point, which can be displayed graphically alongside confidence intervals. The approach is illustrated using an IPD meta-analysis examining effect modifiers for exercise interventions in osteoarthritis, which shows evidence of non-linear relationships and small gains in precision from analysing all time-points jointly.

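The restricted cubic spline basis used in the first stage can be sketched in a few lines. The sketch below uses Harrell's parameterisation, which constrains the spline to be linear beyond the boundary knots; the knot locations and covariate are illustrative placeholders, not the authors' actual analysis.

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis (Harrell's parameterisation).

    For k knots, returns one linear column plus k-2 nonlinear columns;
    the implied spline is linear beyond the boundary knots.
    """
    x = np.asarray(x, dtype=float)
    k = np.asarray(knots, dtype=float)

    def pos3(u):  # truncated cubic (u)_+^3
        return np.where(u > 0.0, u, 0.0) ** 3

    cols = [x]
    denom = k[-1] - k[-2]
    for j in range(len(k) - 2):
        cols.append(pos3(x - k[j])
                    - pos3(x - k[-2]) * (k[-1] - k[j]) / denom
                    + pos3(x - k[-1]) * (k[-2] - k[j]) / denom)
    return np.column_stack(cols)

# Illustrative: common knots (fixed in the set-up phase) applied to a
# hypothetical continuous covariate such as a baseline pain score.
X = rcs_basis(np.linspace(0.0, 10.0, 101), knots=[2.0, 5.0, 8.0])
```

In stage one, each trial would regress the outcome at each time-point on this basis, treatment, and their products; stage two then pools the interaction coefficients across trials in the multivariate model.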
Data sharing policies across health research globally: Cross-sectional meta-research study
Aidan C. Tan, Angela C. Webster, Sol Libesman, Zijing Yang, Rani R. Chand, Weber Liu, Talia Palacios, Kylie E. Hunter, Anna Lene Seidler
Research Synthesis Methods 15(6): 1060-1071 | DOI: 10.1002/jrsm.1757 | Published 2024-09-14 | Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1757

Background: Data sharing improves the value, synthesis, and integrity of research, but rates are low. Data sharing might be improved if data sharing policies were prominent and actionable at every stage of research. We aimed to systematically describe the epidemiology of data sharing policies across the health research lifecycle.

Methods: This was a cross-sectional analysis of the data sharing policies of the largest health research funders, all national ethics committees, all clinical trial registries, the highest-impact medical journals, and all medical research data repositories. Stakeholders' official websites, online reports, and other records were reviewed up to May 2022. The strength and characteristics of their data sharing policies were assessed, including their policies on data sharing intention statements (also known as data accessibility statements) and on data sharing specifically for coronavirus disease studies. Data were manually extracted in duplicate, and policies were descriptively analysed by stakeholder and characteristics.

Results: A total of 935 eligible stakeholders were identified: 110 funders, 124 ethics committees, 18 trial registries, 273 journals, and 410 data repositories. Data sharing was required by 41% (45/110) of funders, no ethics committees or trial registries, 19% (52/273) of journals, and 6% (24/410) of data repositories. Among funder types, a higher proportion of private (63%, 35/55) and philanthropic (67%, 4/6) funders required data sharing than public funders (12%, 6/49).

Conclusion: Data sharing requirements, and even recommendations, were insufficient across health research. Where data sharing was required or recommended, there was limited guidance on implementation. We describe multiple pathways to improve the implementation of data sharing. Public funders and ethics committees are two stakeholders with particularly important untapped opportunities.

Frequency of use of the revised Cochrane Risk of Bias tool (RoB 2) in Cochrane and non-Cochrane systematic reviews published in 2023 and 2024
Alejandro Sandoval-Lentisco, José A. López-López, Julio Sánchez-Meca
Research Synthesis Methods 15(6): 1244-1245 | DOI: 10.1002/jrsm.1755 | Published 2024-09-10 | No abstract available.

A discrete time-to-event model for the meta-analysis of full ROC curves
Ferdinand Valentin Stoye, Claudia Tschammler, Oliver Kuss, Annika Hoyer
Research Synthesis Methods 15(6): 1031-1048 | DOI: 10.1002/jrsm.1753 | Published 2024-09-06 | Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1753

Abstract: The development of new statistical models for the meta-analysis of diagnostic test accuracy studies is still an ongoing field of research, especially with respect to summary receiver operating characteristic (ROC) curves. In the recently published updated version of the "Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy", the authors point to the challenges of this kind of meta-analysis and propose two approaches. However, both come with disadvantages, such as the non-straightforward choice of priors in Bayesian models or the requirement of a two-step approach in which parameters are first estimated for the individual studies and the results then summarised. As an alternative, we propose a novel model that applies methods from time-to-event analysis. We use the discrete proportional hazards approach to treat the different diagnostic thresholds, which are reported by the individual studies and provide the means to estimate sensitivity and specificity, as categorical variables in a generalized linear mixed model, using both the logit and the asymmetric cloglog link. This leads to a model specification with threshold-specific discrete hazards, avoiding a linear dependency between thresholds, discrete hazard, and sensitivity/specificity, and thus increasing model flexibility. We compare the resulting models to approaches from the literature in a simulation study. While the area under the summary ROC curve is estimated comparably well by most approaches, the results show substantial differences in the estimated sensitivities and specificities. We also show the practical applicability of the models to data from a meta-analysis for the screening of type 2 diabetes.

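The discrete time-to-event machinery behind this model can be illustrated with a minimal sketch. Under the cloglog link, each threshold-specific linear predictor maps to a discrete hazard, and cumulating (1 - hazard) over ordered thresholds yields a survival-type probability, analogous to the probability of a marker value exceeding a threshold. The mapping below is an illustrative analogy under these assumptions, not the authors' exact likelihood.

```python
import math

def inv_cloglog(eta):
    """Inverse complementary log-log link: h = 1 - exp(-exp(eta))."""
    return 1.0 - math.exp(-math.exp(eta))

def survival_from_hazards(hazards):
    """Discrete-time survival: S(c) = prod over j <= c of (1 - h_j).

    In the diagnostic analogy, ordered thresholds play the role of
    discrete time points; computed in the diseased group, S(c) behaves
    like the sensitivity at threshold c, and in the non-diseased group
    like 1 - specificity.
    """
    out, s = [], 1.0
    for h in hazards:
        s *= (1.0 - h)
        out.append(s)
    return out

# Illustrative linear predictors at three ordered thresholds.
etas = [-2.0, -1.0, 0.0]
hs = [inv_cloglog(e) for e in etas]
sens = survival_from_hazards(hs)
```

Because each threshold gets its own hazard, the implied sensitivities are monotone in the threshold without forcing a linear relationship between them, which is the flexibility gain the abstract describes.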
Fast-and-frugal decision tree for the rapid critical appraisal of systematic reviews
Robert C. Lorenz, Mirjam Jenny, Anja Jacobs, Katja Matthias
Research Synthesis Methods 15(6): 1049-1059 | DOI: 10.1002/jrsm.1754 | Published 2024-09-05 | Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1754

Abstract: Conducting high-quality overviews of reviews (OoR) is time-consuming. Because the quality of systematic reviews (SRs) varies, it is necessary to critically appraise SRs when conducting an OoR. A well-established appraisal tool is A Measurement Tool to Assess Systematic Reviews (AMSTAR) 2, which takes about 15-32 min per application. To save time, we developed two fast-and-frugal decision trees (FFTs) for assessing the methodological quality of SRs for an OoR, applied either during the full-text screening stage (Screening FFT) or to the resulting pool of SRs (Rapid Appraisal FFT). To build a dataset for developing the FFTs, we identified published AMSTAR 2 appraisals. Overall confidence ratings of AMSTAR 2 were used as the criterion and the 16 items as cues. A total of 1519 appraisals were obtained from 24 publications and divided into training and test datasets. The resulting Screening FFT consists of three items and correctly identifies all non-critically-low-quality SRs (sensitivity of 100%), but has a positive predictive value of 59%. The three-item Rapid Appraisal FFT correctly identifies 80% of the high-quality SRs and 97% of the low-quality SRs, resulting in an accuracy of 95%. The FFTs require about 10% of the 16 AMSTAR 2 items. The Screening FFT may be applied during full-text screening to exclude SRs of critically low quality. The Rapid Appraisal FFT may be applied to the final SR pool to identify SRs that might be of high methodological quality.

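The structure of a fast-and-frugal tree is easy to sketch: a short cascade of cue checks, each of which either exits immediately with a decision or passes the record to the next cue. The cue names and exit rules below are invented placeholders, not the three AMSTAR 2 items the authors actually selected.

```python
def fft_classify(record, tree):
    """Fast-and-frugal tree: check cues in order; each node either
    exits with a decision or hands the record to the next cue.

    `tree` is a list of (cue, exit_value, decision) triples followed
    by a final default decision.
    """
    *nodes, default = tree
    for cue, exit_value, decision in nodes:
        if record.get(cue) == exit_value:
            return decision
    return default

# Hypothetical screening tree: any failed critical item exits at once
# with "critically low"; an SR passing all three cues is retained.
screening_fft = [
    ("protocol_registered", False, "critically low"),
    ("adequate_search", False, "critically low"),
    ("rob_accounted_for", False, "critically low"),
    "retain",
]

verdict = fft_classify(
    {"protocol_registered": True, "adequate_search": True,
     "rob_accounted_for": True},
    screening_fft,
)
```

The immediate-exit design is what makes an FFT frugal: most records are classified after one or two cues, which is how a three-item tree can replace a 16-item instrument during screening.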
Narrative reanalysis: A methodological framework for a new brand of reviews
Steven Hall, Erin Leeder
Research Synthesis Methods 15(6): 1017-1030 | DOI: 10.1002/jrsm.1751 | Published 2024-09-04 | Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1751

Abstract: In response to the evolving needs of knowledge synthesis, this manuscript introduces the concept of narrative reanalysis, a method that refines data from initial reviews, such as systematic reviews, to focus on specific sub-phenomena. Unlike traditional narrative reviews, which lack the methodological rigor of systematic reviews and are broader in scope, our methodological framework for narrative reanalysis applies a structured, systematic framework to the interpretation of existing data. This approach enables a focused investigation of nuanced topics within a broader dataset, enhancing understanding and generating new insights. We detail a five-stage methodological framework that guides the narrative reanalysis process: (1) retrieval of an initial review, (2) identification and justification of a sub-phenomenon, (3) expanded search, selection, and extraction of data, (4) reanalysis of the sub-phenomenon, and (5) writing the report. The proposed framework aims to standardize narrative reanalysis, advocating for its use in academic and research settings to foster more rigorous and insightful literature reviews. This approach bridges the methodological gap between narrative and systematic reviews, offering a valuable tool for researchers to explore detailed aspects of broader topics without the extensive resources required for systematic reviews.

Zero- and few-shot prompting of generative large language models provides weak assessment of risk of bias in clinical trials
Simon Šuster, Timothy Baldwin, Karin Verspoor
Research Synthesis Methods 15(6): 988-1000 | DOI: 10.1002/jrsm.1749 | Published 2024-08-23 | Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1749

Abstract: Existing systems for automating the assessment of risk of bias (RoB) in medical studies are supervised approaches that require substantial training data to work well. However, recent revisions to RoB guidelines have resulted in a scarcity of available training data. In this study, we investigate the effectiveness of generative large language models (LLMs) for assessing RoB. Their application requires little or no training data and, if successful, could serve as a valuable tool to assist human experts during the construction of systematic reviews. Following Cochrane's latest guidelines (RoB2) designed for human reviewers, we prepare instructions that are fed as input to LLMs, which then infer the risk associated with a trial publication. We distinguish between two modelling tasks: directly predicting RoB2 from text; and employing decomposition, in which a RoB2 decision is made after the LLM responds to a series of signalling questions. We curate new testing data sets and evaluate the performance of four general- and medical-domain LLMs. The results fall short of expectations, with LLMs seldom surpassing trivial baselines. On the direct RoB2 prediction test set (n = 5993), LLMs perform comparably to the baselines (F1: 0.1-0.2). In the decomposition task setup (n = 28,150), similar F1 scores are observed. Our additional comparative evaluation on RoB1 data also reveals results substantially below those of a supervised system. This testifies to the difficulty of solving this task based on (complex) instructions alone. Using LLMs as an assisting technology for assessing RoB2 thus currently seems beyond their reach.

Development of the individual participant data integrity tool for assessing the integrity of randomised trials using individual participant data
Kylie E. Hunter, Mason Aberoumand, Sol Libesman, James X. Sotiropoulos, Jonathan G. Williams, Wentao Li, Jannik Aagerup, Ben W. Mol, Rui Wang, Angie Barba, Nipun Shrestha, Angela C. Webster, Anna Lene Seidler
Research Synthesis Methods 15(6): 940-949 | DOI: 10.1002/jrsm.1739 | Published 2024-08-18 | Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1739

Abstract: Increasing integrity concerns in medical research have prompted the development of tools to detect untrustworthy studies. Existing tools primarily assess published aggregate data (AD), though scrutiny of individual participant data (IPD) is often required to detect trustworthiness issues. Thus, we developed the IPD Integrity Tool for detecting integrity issues in randomised trials with IPD available. This manuscript describes the development of this tool. We conducted a literature review to collate and map existing integrity items. These were discussed with an expert advisory group; agreed items were included in a standardised tool and automated where possible. We piloted this tool in two IPD meta-analyses (including 116 trials) and conducted preliminary validation checks on 13 datasets with and without known integrity issues. We identified 120 integrity items: 54 could be assessed using AD, 48 required IPD, and 18 were possible with AD but more comprehensive with IPD. An initial reduced tool was developed through consensus involving 13 advisors, featuring 11 AD items across four domains and 12 IPD items across eight domains. The tool was iteratively refined throughout piloting and validation. All studies with known integrity issues were accurately identified during validation. The final tool includes seven AD domains with 13 items and eight IPD domains with 18 items. The quality of evidence informing healthcare relies on trustworthy data. We describe the development of a tool to enable researchers, editors, and others to detect integrity issues using IPD. Detailed instructions for its application are published as a complementary manuscript in this issue.

A re-analysis of about 60,000 sparse data meta-analyses suggests that using an adequate method for pooling matters
Maxi Schulz, Malte Kramer, Oliver Kuss, Tim Mathes
Research Synthesis Methods 15(6): 978-987 | DOI: 10.1002/jrsm.1748 | Published 2024-08-13 | Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1748

Abstract: In sparse data meta-analyses (with few trials or zero events), conventional methods may distort results. Although better-performing one-stage methods have become available in recent years, their implementation remains limited in practice. This study examines the impact of using conventional methods compared to one-stage models by re-analysing meta-analyses from the Cochrane Database of Systematic Reviews in scenarios with zero event trials and few trials. For each scenario, we computed one-stage methods (generalised linear mixed model [GLMM], beta-binomial model [BBM], Bayesian binomial-normal hierarchical model using a weakly informative prior [BNHM-WIP]) and compared them with conventional methods (Peto odds ratio [PETO] and DerSimonian-Laird method [DL] for zero event trials; DL, Paule-Mandel [PM], and restricted maximum likelihood [REML] method for few trials). While all methods showed similar treatment effect estimates, substantial variability in statistical precision emerged. Conventional methods generally resulted in smaller confidence intervals (CIs) than one-stage models in the zero event situation. In the few trials scenario, the CI lengths were widest for the BBM on average, and significance often changed compared to the PM and REML, despite the relatively wide CIs of the latter. In agreement with simulations and guidelines for meta-analyses with zero event trials, our results suggest that one-stage models are preferable. The best model can either be selected based on the data situation or be a method that performs well across various situations. In the few trials situation, using the BBM, with PM or REML as sensitivity analyses, appears reasonable when conservative results are desired. Overall, our results encourage careful method selection.

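The conventional DerSimonian-Laird method that serves as a comparator here can be sketched in a few lines. This is the textbook inverse-variance version operating on study effects (e.g., log odds ratios) and their within-study variances, not the authors' re-analysis code.

```python
import math

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling.

    Estimates the between-study variance tau^2 from Cochran's Q
    (truncated at zero), then pools with weights 1 / (v_i + tau^2).
    Returns (pooled effect, standard error, tau^2).
    """
    w = [1.0 / v for v in variances]
    sw = sum(w)
    theta_f = sum(wi * y for wi, y in zip(w, effects)) / sw
    q = sum(wi * (y - theta_f) ** 2 for wi, y in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # truncated at zero
    w_star = [1.0 / (v + tau2) for v in variances]
    theta = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return theta, se, tau2

# Three hypothetical log odds ratios with within-study variances.
theta, se, tau2 = dersimonian_laird([0.1, 0.3, -0.2], [0.04, 0.09, 0.05])
```

With zero event trials this two-stage route additionally needs continuity corrections to define the log odds ratios at all, which is one reason the abstract's one-stage models (GLMM, BBM, BNHM-WIP) avoid the second stage entirely.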
Checking the inventory: Illustrating different methods for individual participant data meta-analytic structural equation modeling
Lennert J. Groot, Kees-Jan Kan, Suzanne Jak
Research Synthesis Methods 15(6): 872-895 | DOI: 10.1002/jrsm.1735 | Published 2024-08-13 | Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1735

Abstract: Researchers may have at their disposal the raw data of the studies they wish to meta-analyze. The goal of this study is to identify, illustrate, and compare a range of possible analysis options for researchers to whom raw data are available, wanting to fit a structural equation model (SEM) to these data. This study illustrates techniques that directly analyze the raw data, such as multilevel and multigroup SEM, and techniques based on summary statistics, such as correlation-based meta-analytical structural equation modeling (MASEM), discussing differences in procedures, capabilities, and outcomes. This is done by analyzing a previously published collection of datasets using open source software. A path model reflecting the theory of planned behavior is fitted to these datasets using different techniques involving SEM. Apart from differences in handling of missing data, the ability to include study-level moderators, and conceptualization of heterogeneity, results show differences in parameter estimates and standard errors across methods. Further research is needed to properly formulate guidelines for applied researchers looking to conduct individual participant data MASEM.
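The summary-statistics route mentioned above, correlation-based MASEM, begins by pooling each correlation coefficient across studies before fitting the SEM to the pooled matrix. A minimal fixed-effect sketch using Fisher's z transform is shown below; this is a deliberately simplified stand-in for the full machinery of dedicated packages (e.g., two-stage SEM in metaSEM), and the input values are invented.

```python
import math

def pool_correlations(rs, ns):
    """Fixed-effect pooling of one correlation across studies.

    Transforms each r to Fisher's z, weights by n_i - 3 (the inverse
    of the approximate sampling variance of z), and back-transforms
    the weighted mean to the correlation scale.
    """
    zs = [0.5 * math.log((1.0 + r) / (1.0 - r)) for r in rs]
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)  # back-transform to the r scale

# Pool the same correlation observed in three hypothetical studies.
r_pooled = pool_correlations([0.30, 0.25, 0.40], [120, 85, 200])
```

In stage two of correlation-based MASEM, the matrix of such pooled correlations is fitted with the path model; the differences the abstract reports arise partly because this route conceptualises heterogeneity and missing data differently than raw-data multilevel or multigroup SEM.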