{"title":"PubMed captures more fine-grained bibliographic data on scientific commentary than Web of Science: a comparative analysis.","authors":"Shuang Wang, Kai Zhang, Jian Du","doi":"10.1136/bmjhci-2024-101017","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Research commentaries have the potential for evidence appraisal in emphasising, correcting, shaping and disseminating scientific knowledge.</p><p><strong>Objectives: </strong>To identify the appropriate bibliographic source for capturing commentary information, this study compares comment data in PubMed and Web of Science (WoS) to assess their applicability in evidence appraisal.</p><p><strong>Methods: </strong>Using COVID-19 as a case study, with over 27 k COVID-19 papers in PubMed as a baseline, we designed a comparative analysis for commented-commenting relations in two databases from the same dataset pool, making a fair and reliable comparison. We constructed comment networks for each database for network structural analysis and compared the characteristics of commentary materials and commented papers from various facets.</p><p><strong>Results: </strong>For network comparison, PubMed surpasses WoS with more closed feedback loops, reaching a deeper six-level network compared with WoS' four levels, making PubMed well-suited for evidence appraisal through argument mining. PubMed excels in identifying specialised comments, displaying significantly lower author count (mean, 3.59) and page count (mean, 1.86) than WoS (authors, 4.31, 95% CI of difference of two means = [0.66, 0.79], p<0.001; pages, 2.80, 95% CI of difference of two means = [0.87, 1.01], p<0.001), attributed to PubMed's CICO comment identification algorithm. Commented papers in PubMed also demonstrate higher citations and stronger sentiments, especially significantly elevated disputed rates (PubMed, 24.54%; WoS, 18.8%; baseline, 8.3%; all p<0.0001). Additionally, commented papers in both sources exhibit superior network centrality metrics compared with WoS-only counterparts.</p><p><strong>Conclusion: </strong>Considering the impact and controversy of commented works, the accuracy of comments and the depth of network interactions, PubMed potentially serves as a valuable resource in evidence appraisal and detection of controversial issues compared with WoS.</p>","PeriodicalId":9050,"journal":{"name":"BMJ Health & Care Informatics","volume":null,"pages":null},"PeriodicalIF":4.1000,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11474939/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"BMJ Health & Care Informatics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1136/bmjhci-2024-101017","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Abstract
Background: Research commentaries hold potential for evidence appraisal by emphasising, correcting, shaping and disseminating scientific knowledge.
Objectives: To identify the appropriate bibliographic source for capturing commentary information, this study compares comment data in PubMed and Web of Science (WoS) to assess their applicability in evidence appraisal.
Methods: Using COVID-19 as a case study, with over 27 000 COVID-19 papers in PubMed as a baseline, we designed a comparative analysis of commented-commenting relations in the two databases, drawn from the same dataset pool to ensure a fair and reliable comparison. We constructed a comment network for each database for structural analysis and compared the characteristics of commentary materials and commented papers across several facets.
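A minimal sketch of how such a comment network could be assembled for structural analysis, assuming commenting-commented PMID pairs have already been extracted from the bibliographic records (the example pairs and the use of networkx are illustrative assumptions, not the authors' pipeline):

import networkx as nx

# Hypothetical (commenting_pmid, commented_pmid) pairs; an edge points from a comment to the paper it comments on.
comment_pairs = [
    ("35000001", "34000001"),
    ("35000002", "34000001"),
    ("36000001", "35000001"),  # a comment on a comment, one level deeper
]

G = nx.DiGraph()
G.add_edges_from(comment_pairs)

# Network depth: the longest chain of comment-on-comment relations (defined only when there are no cycles).
depth = nx.dag_longest_path_length(G) if nx.is_directed_acyclic_graph(G) else None

# Closed feedback loops (papers commenting on each other) appear as directed cycles.
loops = list(nx.simple_cycles(G))

# Centrality of commented papers, one of the facets compared across databases.
in_degree = nx.in_degree_centrality(G)

print(depth, len(loops), in_degree)

Deeper comment chains and more closed loops indicate richer commented-commenting interaction, which is the structural property the study compares between the two databases.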
Results: In the network comparison, PubMed surpasses WoS with more closed feedback loops and a deeper, six-level network compared with WoS's four levels, making PubMed well suited for evidence appraisal through argument mining. PubMed also excels at identifying specialised comments, showing significantly lower author counts (mean 3.59) and page counts (mean 1.86) than WoS (authors: 4.31, 95% CI of the difference in means = [0.66, 0.79], p<0.001; pages: 2.80, 95% CI of the difference in means = [0.87, 1.01], p<0.001), attributable to PubMed's CICO comment identification algorithm. Commented papers in PubMed also show higher citation counts and stronger sentiments, most notably a significantly elevated disputed rate (PubMed, 24.54%; WoS, 18.8%; baseline, 8.3%; all p<0.0001). Additionally, commented papers indexed in both sources exhibit superior network centrality metrics compared with their WoS-only counterparts.
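As an illustration of the statistic reported above, a 95% CI for the difference of two means can be obtained with a Welch-style comparison; the arrays below are placeholder data rather than the study's data, and this is only a sketch of the calculation, not the authors' analysis code:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wos_authors = rng.poisson(4.31, 5000)      # placeholder author counts for WoS comments
pubmed_authors = rng.poisson(3.59, 5000)   # placeholder author counts for PubMed comments

# Welch's t-test for unequal variances gives the p-value for the difference in means.
res = stats.ttest_ind(wos_authors, pubmed_authors, equal_var=False)

# 95% CI of mean(WoS) - mean(PubMed) via the Welch-Satterthwaite approximation.
v1, v2 = wos_authors.var(ddof=1), pubmed_authors.var(ddof=1)
n1, n2 = len(wos_authors), len(pubmed_authors)
se = np.sqrt(v1 / n1 + v2 / n2)
df = (v1 / n1 + v2 / n2) ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
t_crit = stats.t.ppf(0.975, df)
mean_diff = wos_authors.mean() - pubmed_authors.mean()
ci = (mean_diff - t_crit * se, mean_diff + t_crit * se)

print(res.pvalue, ci)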
Conclusion: Considering the impact and controversy of commented works, the accuracy of comments and the depth of network interactions, PubMed potentially serves as a more valuable resource than WoS for evidence appraisal and the detection of controversial issues.