{"title":"Four alternative methodologies for simulated treatment comparison: How could the use of simulation be re-invigorated?","authors":"Landan Zhang, Sylwia Bujkiewicz, Dan Jackson","doi":"10.1002/jrsm.1681","DOIUrl":"10.1002/jrsm.1681","url":null,"abstract":"<p>Simulated treatment comparison (STC) is an established method for performing population adjustment for the indirect comparison of two treatments, where individual patient data (IPD) are available for one trial but only aggregate level information is available for the other. The most commonly used method is what we call ‘standard STC’. Here we fit an outcome model using data from the trial with IPD, and then substitute mean covariate values from the trial where only aggregate level data are available, to predict what the first of these trial's outcomes would have been if its population had been the same as the second. However, this type of STC methodology does not involve simulation and can result in bias when the link function used in the outcome model is non-linear. An alternative approach is to use the fitted outcome model to simulate patient profiles in the trial for which IPD are available, but in the other trial's population. This stochastic alternative presents additional challenges. We examine the history of STC and propose two new simulation-based methods that resolve many of the difficulties associated with the current stochastic approach. A virtue of the simulation-based STC methods is that the marginal estimands are then clearly targeted. We illustrate all methods using a numerical example and explore their use in a simulation study.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 2","pages":"227-241"},"PeriodicalIF":9.8,"publicationDate":"2023-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138714810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How trace plots help interpret meta-analysis results","authors":"Christian Röver, David Rindskopf, Tim Friede","doi":"10.1002/jrsm.1693","DOIUrl":"10.1002/jrsm.1693","url":null,"abstract":"<p>The trace plot is seldom used in meta-analysis, yet it is a very informative plot. In this article, we define and illustrate what the trace plot is, and discuss why it is important. The Bayesian version of the plot combines the posterior density of <span></span><math>\u0000 <mrow>\u0000 <mi>τ</mi>\u0000 </mrow></math>, the between-study standard deviation, and the shrunken estimates of the study effects as a function of <span></span><math>\u0000 <mrow>\u0000 <mi>τ</mi>\u0000 </mrow></math>. With a small or moderate number of studies, <span></span><math>\u0000 <mrow>\u0000 <mi>τ</mi>\u0000 </mrow></math> is not estimated with much precision, and parameter estimates and shrunken study effect estimates can vary widely depending on the correct value of <span></span><math>\u0000 <mrow>\u0000 <mi>τ</mi>\u0000 </mrow></math>. The trace plot allows visualization of the sensitivity to <span></span><math>\u0000 <mrow>\u0000 <mi>τ</mi>\u0000 </mrow></math> along with a plot that shows which values of <span></span><math>\u0000 <mrow>\u0000 <mi>τ</mi>\u0000 </mrow></math> are plausible and which are implausible. A comparable frequentist or empirical Bayes version provides similar results. The concepts are illustrated using examples in meta-analysis and meta-regression; implementation in <span>R</span> is facilitated in a Bayesian or frequentist framework using the <span>bayesmeta</span> and <span>metafor</span> packages, respectively.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 3","pages":"413-429"},"PeriodicalIF":9.8,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1693","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138715481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A study of search strategy availability statements and sharing practices for systematic reviews: Ask and you might receive","authors":"Christine J. Neilson, Zahra Premji","doi":"10.1002/jrsm.1696","DOIUrl":"10.1002/jrsm.1696","url":null,"abstract":"<p>The literature search underpins data collection for all systematic reviews (SRs). The SR reporting guideline PRISMA, and its extensions, aim to facilitate research transparency and reproducibility, and ultimately improve the quality of research, by instructing authors to provide specific research materials and data upon publication of the manuscript. Search strategies are one item of data that are explicitly included in PRISMA and the critical appraisal tool AMSTAR2. Yet some authors use search availability statements implying that the search strategies are available upon request instead of providing strategies up front. We sought out reviews with search availability statements, characterized them, and requested the search strategies from authors via email. Over half of the included reviews cited PRISMA but less than a third included any search strategies. After requesting the strategies via email as instructed, we received replies from 46% of authors, and eventually received at least one search strategy from 36% of authors. Requesting search strategies via email has a low chance of success. Ask and you might receive—but you probably will not. SRs that do not make search strategies available are low quality at best according to AMSTAR2; Journal editors can and should enforce the requirement for authors to include their search strategies alongside their SR manuscripts.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 3","pages":"441-449"},"PeriodicalIF":9.8,"publicationDate":"2023-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1696","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138714809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"metamedian: An R package for meta-analyzing studies reporting medians","authors":"Sean McGrath, XiaoFei Zhao, Omer Ozturk, Stephan Katzenschlager, Russell Steele, Andrea Benedetti","doi":"10.1002/jrsm.1686","DOIUrl":"10.1002/jrsm.1686","url":null,"abstract":"<p>When performing an aggregate data meta-analysis of a continuous outcome, researchers often come across primary studies that report the sample median of the outcome. However, standard meta-analytic methods typically cannot be directly applied in this setting. In recent years, there has been substantial development in statistical methods to incorporate primary studies reporting sample medians in meta-analysis, yet there are currently no comprehensive software tools implementing these methods. In this paper, we present the <b>metamedian</b> R package, a freely available and open-source software tool for meta-analyzing primary studies that report sample medians. We summarize the main features of the software and illustrate its application through real data examples involving risk factors for a severe course of COVID-19.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 2","pages":"332-346"},"PeriodicalIF":9.8,"publicationDate":"2023-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138569212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Methods for using Bing's AI-powered search engine for data extraction for a systematic review","authors":"James Edward Hill, Catherine Harris, Andrew Clegg","doi":"10.1002/jrsm.1689","DOIUrl":"10.1002/jrsm.1689","url":null,"abstract":"<p>Data extraction is a time-consuming and resource-intensive task in the systematic review process. Natural language processing (NLP) artificial intelligence (AI) techniques have the potential to automate data extraction saving time and resources, accelerating the review process, and enhancing the quality and reliability of extracted data. In this paper, we propose a method for using Bing AI and Microsoft Edge as a second reviewer to verify and enhance data items first extracted by a single human reviewer. We describe a worked example of the steps involved in instructing the Bing AI Chat tool to extract study characteristics as data items from a PDF document into a table so that they can be compared with data extracted manually. We show that this technique may provide an additional verification process for data extraction where there are limited resources available or for novice reviewers. However, it should not be seen as a replacement to already established and validated double independent data extraction methods without further evaluation and verification. Use of AI techniques for data extraction in systematic reviews should be transparently and accurately described in reports. Future research should focus on the accuracy, efficiency, completeness, and user experience of using Bing AI for data extraction compared with traditional methods using two or more reviewers independently.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 2","pages":"347-353"},"PeriodicalIF":9.8,"publicationDate":"2023-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1689","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138561187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predatory journals and their practices present a conundrum for systematic reviewers and evidence synthesisers of health research: A qualitative descriptive study","authors":"Danielle Pollock, Timothy Hugh Barker, Jennifer C Stone, Edoardo Aromataris, Miloslav Klugar, Anna M Scott, Cindy Stern, Amanda Ross-White, Ashley Whitehorn, Rick Wiechula, Larissa Shamseer, Zachary Munn","doi":"10.1002/jrsm.1684","DOIUrl":"10.1002/jrsm.1684","url":null,"abstract":"<p>Predatory journals are a blemish on scholarly publishing and academia and the studies published within them are more likely to contain data that is false. The inclusion of studies from predatory journals in evidence syntheses is potentially problematic due to this propensity for false data to be included. To date, there has been little exploration of the opinions and experiences of evidence synthesisers when dealing with predatory journals in the conduct of their evidence synthesis. In this paper, the thoughts, opinions, and attitudes of evidence synthesisers towards predatory journals and the inclusion of studies published within these journals in evidence syntheses were sought. Focus groups were held with participants who were experienced evidence synthesisers from JBI (previously the Joanna Briggs Institute) collaboration. Utilising qualitative content analysis, two generic categories were identified: predatory journals within evidence synthesis, and predatory journals within academia. Our findings suggest that evidence synthesisers believe predatory journals are hard to identify and that there is no current consensus on the management of these studies if they have been included in an evidence synthesis. There is a critical need for further research, education, guidance, and development of clear processes to assist evidence synthesisers in the management of studies from predatory journals.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 2","pages":"257-274"},"PeriodicalIF":9.8,"publicationDate":"2023-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1684","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138476387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Network meta analysis to predict the efficacy of an approved treatment in a new indication","authors":"Jennifer L. Proper, Haitao Chu, Purvi Prajapati, Michael D. Sonksen, Thomas A. Murray","doi":"10.1002/jrsm.1683","DOIUrl":"10.1002/jrsm.1683","url":null,"abstract":"<p>Drug repurposing refers to the process of discovering new therapeutic uses for existing medicines. Compared to traditional drug discovery, drug repurposing is attractive for its speed, cost, and reduced risk of failure. However, existing approaches for drug repurposing involve complex, computationally-intensive analytical methods that are not widely used in practice. Instead, repurposing decisions are often based on subjective judgments from limited empirical evidence. In this article, we develop a novel Bayesian network meta-analysis (NMA) framework that can predict the efficacy of an approved treatment in a new indication and thereby identify candidate treatments for repurposing. We obtain predictions using two main steps: first, we use standard NMA modeling to estimate average relative effects from a network comprised of treatments studied in both indications in addition to one treatment studied in only one indication. Then, we model the correlation between relative effects using various strategies that differ in how they model treatments across indications and within the same drug class. We evaluate the predictive performance of each model using a simulation study and find that the model minimizing root mean squared error of the posterior median for the candidate treatment depends on the amount of available data, the level of correlation between indications, and whether treatment effects differ, on average, by drug class. We conclude by discussing an illustrative example in psoriasis and psoriatic arthritis and find that the candidate treatment has a high probability of success in a future trial.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 2","pages":"242-256"},"PeriodicalIF":9.8,"publicationDate":"2023-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138476375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A REML method for the evidence-splitting model in network meta-analysis","authors":"Hans-Peter Piepho, Johannes Forkman, Waqas Ahmed Malik","doi":"10.1002/jrsm.1679","DOIUrl":"10.1002/jrsm.1679","url":null,"abstract":"<p>Checking for possible inconsistency between direct and indirect evidence is an important task in network meta-analysis. Recently, an evidence-splitting (ES) model has been proposed, that allows separating direct and indirect evidence in a network and hence assessing inconsistency. A salient feature of this model is that the variance for heterogeneity appears in both the mean and the variance structure. Thus, full maximum likelihood (ML) has been proposed for estimating the parameters of this model. Maximum likelihood is known to yield biased variance component estimates in linear mixed models, and this problem is expected to also affect the ES model. The purpose of the present paper, therefore, is to propose a method based on residual (or restricted) maximum likelihood (REML). Our simulation shows that this new method is quite competitive to methods based on full ML in terms of bias and mean squared error. In addition, some limitations of the ES model are discussed. While this model splits direct and indirect evidence, it is not a plausible model for the cause of inconsistency.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 2","pages":"198-212"},"PeriodicalIF":9.8,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1679","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138456722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adapting how to use Google Search to identify studies for systematic reviews in view of a recent change to how search results are displayed","authors":"Simon Briscoe, Rebecca Abbott, Hassanat Lawal, Morwenna Rogers, Liz Shaw, Jo Thompson Coon","doi":"10.1002/jrsm.1687","DOIUrl":"10.1002/jrsm.1687","url":null,"abstract":"","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 1","pages":"175-176"},"PeriodicalIF":9.8,"publicationDate":"2023-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138456723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Appraisal methods and outcomes of AMSTAR 2 assessments in overviews of systematic reviews of interventions in the cardiovascular field: A methodological study","authors":"Paschalis Karakasis, Konstantinos I. Bougioukas, Konstantinos Pamporis, Nikolaos Fragakis, Anna-Bettina Haidich","doi":"10.1002/jrsm.1680","DOIUrl":"10.1002/jrsm.1680","url":null,"abstract":"<p>This study aimed to assess the methods and outcomes of The Measurement Tool to Assess systematic Reviews (AMSTAR) 2 appraisals in overviews of reviews (overviews) of interventions in the cardiovascular field and identify factors that are associated with these outcomes. MEDLINE, Scopus, and the Cochrane Database of Systematic Reviews were searched until November 2022. Eligible were overviews of cardiovascular interventions, analyzing systematic reviews (SRs) of randomized controlled trials (RCTs). Extracted data included characteristics of overviews and SRs and AMSTAR 2 appraisal methods and outcomes. Data were synthesized using descriptive statistics and logistic regression to explore potential associations between the characteristics of SRs and extracted AMSTAR 2 overall ratings (“High-Moderate” vs. “Low-Critically low”). The original results on individual AMSTAR 2 items were entered into the official AMSTAR 2 online tool and the recalculated overall confidence ratings were compared to those provided in overviews. All 34 overviews identified were published between 2019 and 2022. Rating of overall confidence following the algorithm suggested by AMSTAR 2 developers was noted in 74% of overviews. The 679 unique included SRs were mainly of “Critically low” (53%) or “Low” (18.7%) confidence and underperformed in items 2 (Protocol, no = 65.2%) and 7 (List of excluded studies, no = 84%). The following characteristics of SRs were significantly associated with higher overall ratings: Cochrane origin, pharmacological interventions, including exclusively RCTs, citation of methodological and reporting guidelines, protocol, absence of funding and publication after AMSTAR 2 release. Generally, overviews' authors tended to deviate from the original rating scheme and ascribe higher ratings to SRs compared to the official AMSTAR 2 online tool. Most SRs included in overviews of cardiovascular interventions have critically low or low confidence in their results. Overviews' authors should be more transparent about the methods used to derive the overall confidence in SRs.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 2","pages":"213-226"},"PeriodicalIF":9.8,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92152061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}