Boosting efficiency in a clinical literature surveillance system with LightGBM.

Cynthia Lokker, Wael Abdelkader, Elham Bagheri, Rick Parrish, Chris Cotoi, Tamara Navarro, Federico Germini, Lori-Ann Linkins, R Brian Haynes, Lingyang Chu, Muhammad Afzal, Alfonso Iorio

PLOS digital health, vol. 3, no. 9, e0000299. Published 2024-09-23. DOI: 10.1371/journal.pdig.0000299. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11419392/pdf/
Citations: 0
Abstract
Given the suboptimal performance of Boolean searching for identifying methodologically sound and clinically relevant studies in large bibliographic databases, exploring machine learning (ML) to efficiently classify studies is warranted. To boost the efficiency of a literature surveillance program, we used a large, internationally recognized dataset of articles tagged for methodological rigor and applied an automated ML approach to train and test binary classification models that predict the probability of a clinical research article being of high methodological quality. We trained over 12,000 models on a dataset of titles and abstracts of 97,805 articles indexed in PubMed from 2012 to 2018, which were manually appraised for rigor by highly trained research associates and rated for clinical relevance by practicing clinicians. Because the dataset is unbalanced, with more articles that do not meet the criteria for rigor, we used the unbalanced dataset as well as over- and under-sampled datasets. Models that maintained sensitivity for high rigor at 99% and maximized specificity were selected, tested in a retrospective set of 30,424 articles from 2020, and validated prospectively in a blinded study of 5,253 articles. The final selected algorithm, which combines a LightGBM (gradient boosting machine) model trained on each dataset, maintained high sensitivity and achieved 57% specificity in the retrospective validation test and 53% in the prospective study. The number of articles that needed to be read to find one meeting appraisal criteria was 3.68 (95% CI 3.52 to 3.85) in the prospective study, compared with 4.63 (95% CI 4.50 to 4.77) when relying only on Boolean searching. Gradient-boosting ML models reduced the work required to classify high-quality clinical research studies by 45%, improving the efficiency of literature surveillance and of subsequent dissemination to clinicians and other evidence users.
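
The abstract describes the core pipeline only at a high level: gradient-boosted classifiers trained on article titles and abstracts, handling of class imbalance, and a decision threshold chosen to hold sensitivity for high rigor at 99% while maximizing specificity. The Python sketch below is a minimal illustration of that idea, not the authors' pipeline: it assumes TF-IDF text features, placeholder data, and class_weight balancing in place of the explicit over- and under-sampling used in the study, and the hyperparameters are arbitrary.

import numpy as np
from lightgbm import LGBMClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split

# texts: "title + abstract" strings; labels: 1 = meets rigor criteria, 0 = does not.
# Placeholder data so the sketch runs end to end; replace with the real corpus.
texts = ["randomized controlled trial of drug X ..."] * 50 + ["narrative review of topic Y ..."] * 450
labels = np.array([1] * 50 + [0] * 450)

X_train_txt, X_test_txt, y_train, y_test = train_test_split(
    texts, labels, test_size=0.3, stratify=labels, random_state=42
)

# Bag-of-words TF-IDF representation (an assumption; any text featurization could be used).
vectorizer = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
X_train = vectorizer.fit_transform(X_train_txt)
X_test = vectorizer.transform(X_test_txt)

# Gradient boosting classifier; class_weight="balanced" is one simple way to handle the
# imbalance, standing in here for the study's over-/under-sampled training sets.
model = LGBMClassifier(
    n_estimators=500, learning_rate=0.05, num_leaves=31,
    class_weight="balanced", random_state=42,
)
model.fit(X_train, y_train)

# Choose the probability cut-off so that ~99% of truly rigorous articles in the
# training data score at or above it (i.e., sensitivity held at 99%).
train_scores = model.predict_proba(X_train)[:, 1]
threshold = np.quantile(train_scores[y_train == 1], 0.01)

# Evaluate on held-out data.
test_scores = model.predict_proba(X_test)[:, 1]
flagged = test_scores >= threshold
sensitivity = flagged[y_test == 1].mean()
specificity = (~flagged[y_test == 0]).mean()
# "Number needed to read": flagged articles reviewed per true high-rigor article found.
nnr = flagged.sum() / max(flagged[y_test == 1].sum(), 1)

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} NNR={nnr:.2f}")

In this sketch the "number needed to read" is simply the inverse of precision among flagged articles; the study reports the analogous quantity (3.68) for its prospective validation set, compared with 4.63 for Boolean searching alone.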