Analyzing the Utility of OpenAlex to Identify Studies for Systematic Reviews: Methods and a Case Study
Claire Stansfield, Hossein Dehdarirad, James Thomas, Silvy Mathew, Alison O'Mara-Eves. Cochrane Evidence Synthesis and Methods, 3(4). doi:10.1002/cesm.70038. Published 2025-07-24. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/cesm.70038

Abstract: Open access scholarly resources have the potential to simplify the literature search process, support more equitable access to research knowledge, and reduce biases arising from lack of access to relevant literature. OpenAlex is the world's largest open access database of academic research. However, it is not known whether OpenAlex is suitable for comprehensively identifying research for systematic reviews. We present an approach to measuring the utility of OpenAlex as part of undertaking a systematic review, and report findings in the context of a systematic map on the implementation of diabetic eye screening. Procedures were developed to investigate OpenAlex's content coverage and capture, focusing on: (1) availability of relevant research records; (2) retrieval of relevant records from a Boolean search of OpenAlex; (3) retrieval of relevant records from combining a PubMed Boolean search with a citations and related-items search of OpenAlex; and (4) efficient estimation of relevant records not identified elsewhere. The searches were conducted in July 2024 and repeated in March 2025, following removal of certain closed access abstracts from the OpenAlex data set. The original systematic review searches yielded 131 relevant records, and 128 (98%) of these are present in OpenAlex. OpenAlex Boolean searches retrieved 126 (96%) of the 131 records, and partial screening yielded two relevant records not previously known to the review team. Retrieval was reduced to 123 (94%) when the searches were repeated in March 2025. However, the volume of records from the OpenAlex Boolean search was considerably greater than that assessed for the original systematic map. Combining a PubMed Boolean search with OpenAlex network graph searches yielded 93% recall. It is feasible and useful to investigate the use of OpenAlex as a key information resource for health topics, and this approach can be modified to investigate OpenAlex for other systematic reviews. However, the volume of records obtained from searches is larger than that obtained from conventional sources, something that could be reduced using machine learning. Further investigations are needed, and our approach should be replicated in other reviews.
Enhancing nursing and other healthcare professionals' knowledge of childhood sexual abuse through self-assessment: A realist review
Dr. Olumide Adisa, Ms. Katie Tyrrell, Dr. Katherine Allen. Cochrane Evidence Synthesis and Methods, 3(4). doi:10.1002/cesm.70019. Published 2025-07-23. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/cesm.70019

Aim: To explore how child sexual abuse/exploitation (CSA/E) self-assessment tools are being used to enhance healthcare professionals' knowledge and confidence.

Background: Child sexual abuse/exploitation is common and associated with lifelong health impacts. Nurses in particular are well placed to facilitate disclosures by adult survivors of CSA/E and promote timely access to support. However, research shows that many are reluctant to enquire about abuse and feel underprepared for disclosures. Self-assessment provides a participatory method for evaluating competencies and identifying areas that need improvement.

Evaluation: Researchers adopted a realist synthesis approach, searching relevant databases for healthcare professionals' self-assessment tools/protocols relevant to adult survivors. In total, researchers reviewed 247 full-text articles. Twenty-five items met the criteria for data extraction, and relevant contexts (C), mechanisms (M), and outcomes (O) were identified and mapped. Eight of these were included in the final synthesis, based on papers that identified two key 'families' of abuse-related self-assessment interventions for healthcare contexts: PREMIS, a validated survey instrument to assess HCP knowledge, confidence, and practice regarding domestic violence and abuse (DVA); and trauma-informed practice/care (TIP/C) organisational self-assessment protocols. Two revised programme theories were formulated: (1) individual self-assessment can promote organisational accountability; and (2) organisational self-assessment can increase the coherence and sustainability of changes in practice.

Conclusions: There is a lack of self-assessment tools/protocols designed to improve healthcare professionals' knowledge and confidence. Our review contributes to the evidence base on improving healthcare responses to CSA/E survivors, illustrating that self-assessment tools or protocols designed to improve HCP responses to adult survivors of CSA/E remain underdeveloped and under-studied. Refined programme theories developed during synthesis regarding DVA and TIP/C-related tools or protocols suggest areas for CSA/E-specific future research with stakeholders and service users.
Using Artificial Intelligence Tools as Second Reviewers for Data Extraction in Systematic Reviews: A Performance Comparison of Two AI Tools Against Human Reviewers
T. Helms Andersen, T. M. Marcussen, A. D. Termannsen, T. W. H. Lawaetz, O. Nørgaard. Cochrane Evidence Synthesis and Methods, 3(4). doi:10.1002/cesm.70036. Published 2025-07-14. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/cesm.70036

Background: Systematic reviews are essential but time-consuming and expensive. Large language models (LLMs) and artificial intelligence (AI) tools could potentially automate data extraction, but no comprehensive workflow has been tested for different review types.

Objective: To evaluate Elicit's and ChatGPT's abilities to extract data from journal articles as a replacement for one of two human data extractors in systematic reviews.

Methods: Human-extracted data from three systematic reviews (30 articles in total) were compared to data extracted by Elicit and ChatGPT. The AI tools extracted population characteristics, study design, and review-specific variables. Performance metrics were calculated against human double-extracted data as the gold standard, followed by a detailed error analysis.

Results: Precision, recall, and F1-score were all 92% for Elicit, and 91%, 89%, and 90%, respectively, for ChatGPT. Recall was highest for study design (Elicit: 100%; ChatGPT: 90%) and population characteristics (Elicit: 100%; ChatGPT: 97%), while review-specific variables achieved 77% in Elicit and 80% in ChatGPT. Elicit had four instances of confabulation while ChatGPT had three. There was no significant difference between the two AI tools' performance (recall difference: 3.3 percentage points; 95% CI: −5.2 to 11.9; p = 0.445).

Conclusion: AI tools demonstrated high and similar performance in data extraction compared to human reviewers, particularly for standardized variables. Error analysis revealed confabulations in 4% of data points. We propose adopting AI-assisted extraction to replace the second human extractor, with the second human instead focusing on reconciling discrepancies between the AI and the primary human extractor.
Creating Interactive Data Dashboards for Evidence Syntheses
Leslie A. Perdue, Shaina D. Trevino, Sean Grant, Jennifer S. Lin, Emily E. Tanner-Smith. Cochrane Evidence Synthesis and Methods, 3(4). doi:10.1002/cesm.70035. Published 2025-06-25. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/cesm.70035

Abstract: Systematic review findings are typically disseminated via static outputs, such as scientific manuscripts, which can limit accessibility and usability for diverse audiences. Interactive data dashboards transform systematic review data into dynamic, user-friendly visualizations, allowing deeper engagement with evidence synthesis findings. We propose a workflow for creating interactive dashboards to display evidence synthesis results, comprising three key phases: planning, development, and deployment. Planning involves defining the dashboard objectives and key audiences, selecting appropriate software (e.g., Tableau or R Shiny), and preparing the data. Development includes designing a user-friendly interface and specifying interactive elements. Deployment focuses on making the dashboard available to users and conducting user testing. Throughout all phases, we emphasize seeking and incorporating interest-holder input and aligning dashboards with the intended audience's needs. To demonstrate this workflow, we provide two examples from previous systematic reviews. The first dashboard, created in Tableau, presents findings from a meta-analysis to support a U.S. Preventive Services Task Force recommendation on lipid disorder screening in children; the second uses R Shiny to display data from a scoping review on the 4-day school week among K-12 students in the U.S. Both dashboards incorporate interactive elements to present complex evidence tailored to different interest-holders, including non-research audiences. Interactive dashboards can enhance the utility of evidence syntheses by providing a user-friendly tool for interest-holders to explore data relevant to their specific needs. This workflow can be adapted to create interactive dashboards in flexible formats to increase the use and accessibility of systematic review findings.
{"title":"Data Extractions Using a Large Language Model (Elicit) and Human Reviewers in Randomized Controlled Trials: A Systematic Comparison","authors":"Joleen Bianchi, Julian Hirt, Magdalena Vogt, Janine Vetsch","doi":"10.1002/cesm.70033","DOIUrl":"https://doi.org/10.1002/cesm.70033","url":null,"abstract":"<div>\u0000 \u0000 \u0000 <section>\u0000 \u0000 <h3> Aim</h3>\u0000 \u0000 <p>We aimed at comparing data extractions from randomized controlled trials by using Elicit and human reviewers.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Background</h3>\u0000 \u0000 <p>Elicit is an artificial intelligence tool which may automate specific steps in conducting systematic reviews. However, the tool's performance and accuracy have not been independently assessed.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Methods</h3>\u0000 \u0000 <p>For comparison, we sampled 20 randomized controlled trials of which data were extracted manually from a human reviewer. We assessed the variables study objectives, sample characteristics and size, study design, interventions, outcome measured, and intervention effects and classified the results into “more,” “equal to,” “partially equal,” and “deviating” extractions. STROBE checklist was used to report the study.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Results</h3>\u0000 \u0000 <p>We analysed 20 randomized controlled trials from 11 countries. The studies covered diverse healthcare topics. Across all seven variables, Elicit extracted “more” data in 29.3% of cases, “equal” in 20.7%, “partially equal” in 45.7%, and “deviating” in 4.3%. Elicit provided “more” information for the variable study design (100%) and sample characteristics (45%). In contrast, for more nuanced variables, such as “intervention effects,” Elicit's extractions were less detailed, with 95% rated as “partially equal.”</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Conclusions</h3>\u0000 \u0000 <p>Elicit was capable of extracting data partly correct for our predefined variables. Variables like “intervention effect” or “intervention” may require a human reviewer to complete the data extraction. Our results suggest that verification by human reviewers is necessary to ensure that all relevant information is captured completely and correctly by Elicit.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Implications</h3>\u0000 \u0000 <p>Systematic reviews are labor-intensive. Data extraction process may be facilitated by artificial intelligence tools. Use of Elicit may require a human reviewer to double-check the extracted data.</p>\u0000 </section>\u0000 </div>","PeriodicalId":100286,"journal":{"name":"Cochrane Evidence Synthesis and Methods","volume":"3 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cesm.70033","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144244667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using GPT-4 for Title and Abstract Screening in a Literature Review of Public Policies: A Feasibility Study
Max Rubinstein, Sean Grant, Beth Ann Griffin, Seema Choksy Pessar, Bradley D. Stein. Cochrane Evidence Synthesis and Methods, 3(3). doi:10.1002/cesm.70031. Published 2025-05-22. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/cesm.70031

Introduction: We describe the first known use of large language models (LLMs) to screen titles and abstracts in a review of public policy literature. Our objective was to assess the percentage of articles GPT-4 recommended for exclusion that should have been included (the "false exclusion rate").

Methods: We used GPT-4 to exclude articles from a database for a literature review of quantitative evaluations of federal and state policies addressing the opioid crisis. We exported our bibliographic database to a CSV file containing titles, abstracts, and keywords and asked GPT-4 to recommend whether to exclude each article. We conducted preliminary testing of these recommendations using a subset of articles and a final test on a sample of the entire database. We designated a false exclusion rate of 10% as an adequate performance threshold.

Results: GPT-4 recommended excluding 41,742 of the 43,480 articles (96%) containing an abstract. Our preliminary test identified only one false exclusion; our final test identified no false exclusions, yielding an estimated false exclusion rate of 0.00 [0.00, 0.05]. Fewer than 1% of the 41,742 excluded articles (417) were incorrectly excluded. After manually assessing the eligibility of all 1,738 articles that GPT-4 did not exclude, we identified 608 as eligible: 65% of the articles recommended for inclusion should have been excluded.

Discussion/Conclusions: GPT-4 performed well at recommending articles to exclude from our literature review, resulting in substantial time and cost savings. A key limitation is that we did not use GPT-4 to determine inclusions, nor did our model perform well on this task. However, GPT-4 dramatically reduced the number of articles requiring review. Systematic reviewers should conduct performance evaluations to ensure that an LLM meets a minimally acceptable quality standard before relying on its recommendations.
Artificial Intelligence and Machine Learning to Improve Evidence Synthesis Production Efficiency: An Observational Study of Resource Use and Time-to-Completion
Christopher James Rose, Jose Francisco Meneses-Echavez, Ashley Elizabeth Muller, Rigmor C. Berg, Tiril C. Borge, Patricia Sofia Jacobsen Jardim, Chris Cooper. Cochrane Evidence Synthesis and Methods, 3(3). doi:10.1002/cesm.70030. Published 2025-05-19. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/cesm.70030

Introduction: Evidence syntheses are crucial in healthcare and elsewhere but are resource-intensive, often taking years to produce. Artificial intelligence and machine learning (AI/ML) tools may improve production efficiency in certain review phases, but little is known about their impact on entire reviews.

Methods: We performed prespecified analyses of a convenience sample of eligible healthcare- or welfare-related reviews commissioned at the Norwegian Institute of Public Health between August 1, 2020 (the first commission to use AI/ML) and January 31, 2023 (administrative cut-off). The main exposure was AI/ML use following an internal support team's recommendation versus no use. Ranking (e.g., priority screening), classification (e.g., study design), clustering (e.g., documents), and bibliometric analysis (e.g., OpenAlex) tools were included, but we did not include or exclude specific tools. Generative AI tools were not widely available during the study period. The outcomes were resources (person-hours) and time from commission to completion (approval for delivery, including peer review; weeks). Analyses accounted for nonrandomized assignment and censored outcomes (reviews ongoing at cut-off). Researchers classifying exposures were blinded to outcomes; the statistician was blinded to exposure.

Results: Among 39 reviews, 7 (18%) were health technology assessments rather than systematic reviews, 19 (49%) focused on healthcare rather than welfare, 18 (46%) planned meta-analysis, and 3 (8%) were ongoing at cut-off. AI/ML tools were used in 27 (69%) reviews. Reviews that used AI/ML as recommended used more resources (mean 667 vs. 291 person-hours) but were completed slightly faster (27.6 vs. 28.2 weeks). These differences were not statistically significant (relative resource use: 3.71; 95% CI: 0.36–37.95; p = 0.269; relative time-to-completion: 0.92; 95% CI: 0.53–1.58; p = 0.753).

Conclusions: Associations between AI/ML use and the outcomes remain uncertain. Multicenter studies or meta-analyses may be needed to determine whether these tools meaningfully reduce resource use and time to produce evidence syntheses.
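The Methods note that reviews still ongoing at the administrative cut-off were treated as censored; a minimal sketch of that idea using Kaplan-Meier estimation from the `lifelines` library, with invented data (this is not the study's prespecified model).

```python
# Minimal sketch of censored time-to-completion, assuming `lifelines` is
# installed; completed=0 marks a review still ongoing at cut-off (censored).
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical data: weeks from commission to completion (or to cut-off).
df = pd.DataFrame({
    "weeks":     [27.6, 30.0, 22.5, 28.2, 35.0, 40.0],
    "completed": [1,    1,    1,    1,    0,    1],
    "ai_ml":     [1,    1,    1,    0,    0,    0],
})

kmf = KaplanMeierFitter()
for used, label in [(1, "AI/ML as recommended"), (0, "no AI/ML")]:
    grp = df[df["ai_ml"] == used]
    kmf.fit(grp["weeks"], event_observed=grp["completed"], label=label)
    print(label, "median weeks:", kmf.median_survival_time_)
```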
{"title":"Information Practice as Dialogue: The Case for Collaboration in Evidence Searching and Finding for More Complex Reviews","authors":"Parkhill Anne, Merner Bronwen, Ryan Rebecca","doi":"10.1002/cesm.70029","DOIUrl":"https://doi.org/10.1002/cesm.70029","url":null,"abstract":"<p>Cochrane Consumers and Communication Group's (CCC) approach to evidence searching has evolved over time in the context of Cochrane's rigorous methodological advice [<span>1, 2</span>]. CCC is a Cochrane review group responsible for coordinating the preparation and publication of evidence syntheses that affect the way people interact with healthcare professionals, services and researchers. CCC includes a highly skilled Information Specialist who collaborates with CCC author teams to design a rigorous search strategy to gather evidence to answer the review question. In this commentary, we discuss the transformation of the information practice of searching in CCC from being a largely technical exercise conducted solely by the Information Specialist to a collaborative dialogue between the Information Specialist and author teams.</p><p>A key reason for the transformation in our search methods has been that CCC reviews tend to be complex, with review questions that are generally not as easily answered as clinically focused reviews. Our research, and information practice specifically, is contextualized and guided by a three-way dynamic of patient preferences and experiences, research evidence, and professional expertize. The reviews are rigorous in their examination of evidence on people's healthcare interactions, including how people self-manage health and disease, understand screening, health and treatment, and negotiate and share decisions with healthcare professionals within systems and different settings. However, interventions to change behaviors, to educate, support and up-skill people to participate actively in their healthcare, are often complex, multifaceted and their effects evaluated via multiple diverse outcomes [<span>3</span>]. This complexity necessarily shapes our methods of information practice.</p><p>Early in the life of CCC and for many years, we viewed searching as a largely solitary technical exercise performed by a skilled Information Specialist following conventional, rigorous Cochrane search methods. Often this required labor-intensive search development, resulting in delays for search results and an excessive screening obligation (e.g., some review questions resulted in authors needing to screen more than 25,000 search results). As volume and complexity of literature in the health communication area increased, we moved towards search strategies developed with practicalities of reference screening in mind [<span>4, 5</span>]. We have since developed transparent and pragmatic search strategies by means of embedded and open dialogue [<span>6</span>] with authors. In the context of increasing topic complexity and rigorous information searching, this approach maximizes identification of relevant references while avoiding unmanageable reference numbers for screening.</p><p>In this commentary, we explore CCC's approach to searching and its evolution over time in the context of Cochrane's rigorous methodological advice. 
We illustrat","PeriodicalId":100286,"journal":{"name":"Cochrane Evidence Synthesis and Methods","volume":"3 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cesm.70029","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143883954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing the reporting quality of published qualitative evidence syntheses in the Cochrane Library
Martina Giltenane, Aoife O'Mahony, Mayara S. Bianchim, Andrew Booth, Angela Harden, Catherine Houghton, Emma F. France, Heather Ames, Kate Flemming, Katy Sutcliffe, Ruth Garside, Tomas Pantoja, Jane Noyes. Cochrane Evidence Synthesis and Methods, 3(3). doi:10.1002/cesm.70023. Published 2025-04-15. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/cesm.70023

Background: More than ten years since the first qualitative evidence synthesis (QES) was published in the Cochrane Library, QESs and mixed-methods reviews (MMRs) with a qualitative component have become increasingly common and influential in healthcare research and policy development. The quality of such reviews and the completeness with which they are reported are therefore of paramount importance.

Aim: This review aimed to assess the reporting quality of published QESs and MMRs with a qualitative component in the Cochrane Library.

Methods: All published QESs and MMRs were identified from the Cochrane Library. A bespoke framework, developed by key international experts based on the Effective Practice and Organisation of Care (EPOC), Enhancing Transparency in Reporting the Synthesis of Qualitative Research (ENTREQ), and meta-ethnography (eMERGe) reporting guidance, was used to code the quality of reporting of QESs and MMRs.

Results: Thirty-one reviews were identified, including 11 MMRs. The reporting quality of the QESs and MMRs published by Cochrane varied considerably. Based on the criteria within our framework, just over a quarter (8, 26%) were considered to meet satisfactory reporting standards, 10 (32%) could have provided clearer or more detailed descriptions in their reporting, just over a quarter (8, 26%) provided poor-quality or insufficient descriptions, and five (16%) omitted descriptions relevant to our framework.

Conclusion: This assessment offers important insights into the reporting practices prevalent in these review types. Methodology and reporting have changed considerably over time. Earlier QESs have not necessarily omitted important reporting components; rather, our understanding of what should be completed and reported has grown considerably. The variability in reporting quality within QESs and MMRs underscores the need to develop Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidance specifically for QES.
Should we adopt the case report format to report challenges in complicated evidence synthesis? A proposal and illustration of a case report of a complex search strategy for humanitarian interventions
Chris Cooper, Zahra Premji, Cem Yavuz, Mark Engelbert. Cochrane Evidence Synthesis and Methods, 3(3). doi:10.1002/cesm.70021. Published 2025-04-13. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/cesm.70021

Abstract: Case reports are a form of evidence in medicine: short published reports detailing an unusual or novel clinical case, disseminated for the attention of clinical staff. This form of report is not common outside of clinical practice. We ask whether adopting the case report might also be useful in evidence synthesis: here, the case represents a challenge in undertaking evidence synthesis, and the report details not only the resolution but also shows the working used to resolve the challenge. Our rationale is that methodological responses to problems arising in complicated evidence synthesis often go unreported. The risk is that lessons learned in developing evidence syntheses are lost if not recorded, which represents a form of research waste. We suggest that adopting the case report format might offer the opportunity to present not only a challenge (the case) but also a worked example of a possible solution (the report). These case reports would represent a resting place for the case, with notes left behind for future researchers to follow. We provide an example of a case report: a complicated search strategy developed to inform an evidence gap map on the effects of interventions in humanitarian settings on food security outcomes in low- and middle-income countries and specific high-income countries. Our report details the solution that we developed (the search strategy). We also illustrate how we conceptualised the search, the approaches that we tested but rejected, and the ideas that we pursued.