Twenty years of network meta-analysis: Continuing controversies and recent developments
A. E. Ades, Nicky J. Welton, Sofia Dias, David M. Phillippo, Deborah M. Caldwell
Research Synthesis Methods 15(5): 702-727. DOI: 10.1002/jrsm.1700. Published 2024-01-18. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1700

Abstract: Network meta-analysis (NMA) is an extension of pairwise meta-analysis (PMA) which combines evidence from trials on multiple treatments in connected networks. NMA delivers internally consistent estimates of relative treatment efficacy, needed for rational decision making. Over its first 20 years, the use of NMA has grown exponentially, with applications both in health technology assessment (HTA), primarily reimbursement decisions and clinical guideline development, and in clinical research publications. This has been a period of transition in meta-analysis: first from its roots in educational and social psychology, where large heterogeneous datasets could be explored to find effect modifiers, to smaller pairwise meta-analyses in clinical medicine with, on average, fewer than six studies. This has been followed by narrowly focused estimation of the effects of specific treatments at specific doses in specific populations in sparse networks, where direct comparisons are unavailable or are informed by only one or two studies. NMA is a powerful and well-established technique but, in spite of the exponential increase in applications, doubts about the reliability and validity of NMA persist. Here we outline the continuing controversies and review some recent developments. We suggest that heterogeneity should be minimized, as it poses a threat to the reliability of NMA that has not been fully appreciated, perhaps because it has not been seen as a problem in PMA. More research is needed on the extent of heterogeneity and inconsistency in datasets used for decision making, on formal methods for making recommendations based on NMA, and on the further development of multi-level network meta-regression.
{"title":"Appropriateness of conducting and reporting random-effects meta-analysis in oncology","authors":"Jinma Ren, Jia Ma, Joseph C. Cappelleri","doi":"10.1002/jrsm.1702","DOIUrl":"10.1002/jrsm.1702","url":null,"abstract":"<p>A random-effects model is often applied in meta-analysis when considerable heterogeneity among studies is observed due to the differences in patient characteristics, timeframe, treatment regimens, and other study characteristics. Since 2014, the journals <i>Research Synthesis Methods</i> and the <i>Annals of Internal Medicine</i> have published a few noteworthy papers that explained why the most widely used method for pooling heterogeneous studies—the DerSimonian–Laird (DL) estimator—can produce biased estimates with falsely high precision and recommended to use other several alternative methods. Nevertheless, more than half of studies (55.7%) published in top oncology-specific journals during 2015–2022 did not report any detailed method in the random-effects meta-analysis. Of the studies that did report the methodology used, the DL method was still the dominant one reported. Thus, while the authors recommend that <i>Research Synthesis Methods</i> and the <i>Annals of Internal Medicine</i> continue to increase the publication of its articles that report on specific methods for handling heterogeneity and use random-effects estimates that provide more accurate confidence limits than the DL estimator, other journals that publish meta-analyses in oncology (and presumably in other disease areas) are urged to do the same on a much larger scale than currently documented.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 2","pages":"326-331"},"PeriodicalIF":9.8,"publicationDate":"2024-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139465478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using qualitative comparative analysis as a mixed methods synthesis in systematic mixed studies reviews: Guidance and a worked example
Reem El Sherif, Pierre Pluye, Quan Nha Hong, Benoît Rihoux
Research Synthesis Methods 15(3): 450-465. DOI: 10.1002/jrsm.1698. Published 2024-01-09. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1698

Abstract: Qualitative comparative analysis (QCA) is a hybrid method designed to bridge the gap between qualitative and quantitative research through a case-sensitive approach that considers each case holistically as a complex configuration of conditions and outcomes. QCA allows for multiple conjunctural causation, implying that it is often a combination of conditions that produces an outcome, that multiple pathways may lead to the same outcome, and that in different contexts the same condition may have a different impact on the outcome. This approach to complexity allows QCA to provide a practical understanding of complex, real-world situations and of the context in which interventions are implemented. Guides exist for conducting QCA in primary research and in quantitative systematic reviews, yet, to our knowledge, there is no guidance for conducting QCA in systematic mixed studies reviews (SMSRs). Thus, the specific objectives of this paper are to (1) describe a step-by-step approach for novice researchers using QCA to integrate qualitative and quantitative evidence, including guidance on how to use software; (2) highlight specific challenges; (3) propose potential solutions from a worked example; and (4) provide recommendations for reporting.
{"title":"Enhancing recall in automated record screening: A resampling algorithm","authors":"Zhipeng Hou, Elizabeth Tipton","doi":"10.1002/jrsm.1690","DOIUrl":"10.1002/jrsm.1690","url":null,"abstract":"<p>Literature screening is the process of identifying all relevant records from a pool of candidate paper records in systematic review, meta-analysis, and other research synthesis tasks. This process is time consuming, expensive, and prone to human error. Screening prioritization methods attempt to help reviewers identify most relevant records while only screening a proportion of candidate records with high priority. In previous studies, screening prioritization is often referred to as automatic literature screening or automatic literature identification. Numerous screening prioritization methods have been proposed in recent years. However, there is a lack of screening prioritization methods with reliable performance. Our objective is to develop a screening prioritization algorithm with reliable performance for practical use, for example, an algorithm that guarantees an 80% chance of identifying at least <span></span><math>\u0000 <mrow>\u0000 <mn>80</mn>\u0000 <mo>%</mo>\u0000 </mrow></math> of the relevant records. Based on a target-based method proposed in Cormack and Grossman, we propose a screening prioritization algorithm using sampling with replacement. The algorithm is a wrapper algorithm that can work with any current screening prioritization algorithm to guarantee the performance. We prove, with mathematics and probability theory, that the algorithm guarantees the performance. We also run numeric experiments to test the performance of our algorithm when applied in practice. The numeric experiment results show this algorithm achieve reliable performance under different circumstances. The proposed screening prioritization algorithm can be reliably used in real world research synthesis tasks.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 3","pages":"372-383"},"PeriodicalIF":9.8,"publicationDate":"2024-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1690","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139376893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Advancing the methodology of mapping reviews: A scoping review
Hanan Khalil, Fiona Campbell, Katrina Danial, Danielle Pollock, Zachary Munn, Vivian Welsh, Ashrita Saran, Dimi Hoppe, Andrea C. Tricco
Research Synthesis Methods 15(3): 384-397. DOI: 10.1002/jrsm.1694. Published 2024-01-02. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1694

Abstract: This scoping review aims to identify and systematically review published mapping reviews, to assess their commonality and heterogeneity, and to determine whether additional efforts should be made to standardise methodology and reporting. The following databases were searched: Ovid MEDLINE, Embase, CINAHL, PsycINFO, the Campbell Collaboration database, Social Science Abstracts, and Library and Information Science Abstracts (LISA). Following a pilot test on a random sample of 20 citations, two team members independently completed all title and abstract screening. Ten articles were piloted at full-text screening, after which each citation was reviewed independently by two team members; discrepancies at both stages were resolved through discussion. Following a pilot test on a random sample of five relevant full-text articles, one team member abstracted all the relevant data, and uncertainties in the data abstraction were resolved by another team member. A total of 335 articles were eligible for this scoping review and subsequently included. The number of published mapping reviews grew over the years, from 5 in 2010 to 73 in 2021. Moreover, there was significant variability in the reporting of the included mapping reviews, including their research question, a priori protocol, methodology, data synthesis, and reporting. This work further highlights the gaps in evidence synthesis methodologies. Further guidance developed by evidence synthesis organisations such as JBI and Campbell has the potential to clarify challenges experienced by researchers, given the number of mapping reviews published every year.
{"title":"Meta-analysis and partial correlation coefficients: A matter of weights","authors":"Sanghyun Hong, W. Robert Reed","doi":"10.1002/jrsm.1697","DOIUrl":"10.1002/jrsm.1697","url":null,"abstract":"<p>This study builds on the simulation framework of a recent paper by Stanley and Doucouliagos (<i>Research Synthesis Methods</i> 2023;14;515–519). S&D use simulations to make the argument that meta-analyses using partial correlation coefficients (PCCs) should employ a “suboptimal” estimator of the PCC standard error when constructing weights for fixed effect and random effects estimation. We address concerns that their simulations and subsequent recommendation may give meta-analysts a misleading impression. While the estimator they promote dominates the “correct” formula in their Monte Carlo framework, there are other estimators that perform even better. We conclude that more research is needed before best practice recommendations can be made for meta-analyses with PCCs.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 2","pages":"303-312"},"PeriodicalIF":9.8,"publicationDate":"2023-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1697","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139071569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian random-effects meta-analysis with empirical heterogeneity priors for application in health technology assessment with very few studies
Jona Lilienthal, Sibylle Sturtz, Christoph Schürmann, Matthias Maiworm, Christian Röver, Tim Friede, Ralf Bender
Research Synthesis Methods 15(2): 275-287. DOI: 10.1002/jrsm.1685. Published 2023-12-28. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1685

Abstract: In Bayesian random-effects meta-analysis, the use of weakly informative prior distributions is of particular benefit when only a few studies are included, a situation often encountered in health technology assessment (HTA). Suggestions for empirical prior distributions are available in the literature, but it is unknown whether these are adequate in the context of HTA. Therefore, a database of all relevant meta-analyses conducted by the Institute for Quality and Efficiency in Health Care (IQWiG, Germany) was constructed to derive empirical prior distributions for the heterogeneity parameter suitable for HTA. An extension to the normal-normal hierarchical model had previously been suggested for this purpose. For different effect measures, this extended model was applied to the database to conservatively derive a prior distribution for the heterogeneity parameter. Comparison of a Bayesian approach using the derived priors with IQWiG's current standard approach to evidence synthesis shows favorable properties. These prior distributions are therefore recommended for future meta-analyses in HTA settings and could be embedded into the IQWiG evidence synthesis approach in the case of very few studies.
Consensus on the definition and assessment of external validity of randomized controlled trials: A Delphi study
Andres Jung, Tobias Braun, Susan Armijo-Olivo, Dimitris Challoumas, Kerstin Luedtke
Research Synthesis Methods 15(2): 288-302. DOI: 10.1002/jrsm.1688. Published 2023-12-25. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1688

Abstract: External validity is an important parameter that needs to be considered for decision making in health research, but no widely accepted measurement tool for the assessment of external validity of randomized controlled trials (RCTs) exists. One of the most limiting factors for creating such a tool is probably the substantial heterogeneity and lack of consensus in this field. The objective of this study was to reach consensus on a definition of external validity and on criteria to assess the external validity of RCTs included in systematic reviews. A three-round online Delphi study was conducted. The development of the Delphi survey was based on findings from a previous systematic review, and potential panelists were identified through a comprehensive web search. Consensus was reached when at least 67% of the panelists agreed with a proposal. Eighty-four panelists from different countries and various disciplines participated in at least one round of this study. Consensus was reached on the definition of external validity (“External validity is the extent to which results of trials provide an acceptable basis for generalization to other circumstances such as variations in populations, settings, interventions, outcomes, or other relevant contextual factors”) and on 14 criteria to assess the external validity of RCTs in systematic reviews. The results of this Delphi study provide a consensus-based reference standard for future tool development. Future research should focus on adapting, pilot testing, and validating these criteria to develop measurement tools for the assessment of external validity.
Automated data analysis of unstructured grey literature in health research: A mapping review
Lena Schmidt, Saleh Mohamed, Nick Meader, Jaume Bacardit, Dawn Craig
Research Synthesis Methods 15(2): 178-197. DOI: 10.1002/jrsm.1692. Published 2023-12-19. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1692

Abstract: The amount of grey literature and ‘softer’ intelligence from social media or websites is vast. Given the long lead times of producing high-quality peer-reviewed health information, there is demand for new ways to provide prompt input for secondary research. To our knowledge, this is the first review of automated data extraction methods or tools for health-related grey literature and soft data, with a focus on (semi-)automating horizon scans, health technology assessments (HTA), evidence maps, and other literature reviews. We searched six databases to cover both the health and computer science literature. After deduplication, 10% of the search results were screened by two reviewers; the remainder was single-screened up to an estimated 95% sensitivity, and screening was stopped early after an additional 1000 results yielded no new includes. All full texts were retrieved, screened, and extracted by a single reviewer, and 10% were checked in duplicate. We included 84 papers covering automation for health-related social media, internet fora, news, patents, government agencies and charities, or trial registers. From each paper, we extracted data about functionalities important for users of the tool or method, information about the level of support and reliability, and practical challenges and research gaps. Poor availability of code, data, and usable tools leads to low transparency regarding performance and to duplication of work. Financial implications, scalability, integration into downstream workflows, and meaningful evaluations should be carefully planned before starting to develop a tool, given the vast amounts of data and the opportunities these tools offer to expedite research.
Assessment of temporal instability in the applied ecology and conservation evidence base
Elizabeth Brisco, Elena Kulinskaya, Julia Koricheva
Research Synthesis Methods 15(3): 398-412. DOI: 10.1002/jrsm.1691. Published 2023-12-19. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1691

Abstract: Outcomes of meta-analyses are increasingly used to inform evidence-based decision making in various research fields. However, a number of recent studies have reported rapid temporal changes in the magnitude and significance of reported effects, which could cause policy-relevant recommendations from meta-analyses to go quickly out of date. We assessed the extent and patterns of temporal trends in the magnitude and statistical significance of cumulative effects in meta-analyses in applied ecology and conservation published between 2004 and 2018. Of the 121 meta-analyses analysed, 93% showed a temporal trend in cumulative effect magnitude or significance, with 27% of the datasets exhibiting temporal trends in both. The most common pattern was the early-study effect, in which at least one effect-size estimate from the first 5 years differed in magnitude by more than 50% from the subsequent estimate. The observed temporal trends persisted in the majority of datasets once moderators were accounted for. Only five datasets showed significant changes in sample size over time, which could potentially explain the observed temporal change in cumulative effects. Year of publication of the meta-analysis had no significant effect on the presence of temporal trends in cumulative effects. Our results show that temporal changes in magnitude and statistical significance are widespread in applied ecology and represent a serious potential threat to the use of meta-analyses for decision making in conservation and environmental management. We recommend the use of cumulative meta-analysis and call for more studies exploring the causes of these temporal effects.