Methodological evaluation of original articles on radiomics and machine learning for outcome prediction based on positron emission tomography (PET).

Impact Factor: 1.2
Nuklearmedizin. Nuclear medicine | Pub Date: 2023-12-01 | Epub Date: 2023-11-23 | DOI: 10.1055/a-2198-0545
Julian Manuel Michael Rogasch, Kuangyu Shi, David Kersting, Robert Seifert
{"title":"Methodological evaluation of original articles on radiomics and machine learning for outcome prediction based on positron emission tomography (PET).","authors":"Julian Manuel Michael Rogasch, Kuangyu Shi, David Kersting, Robert Seifert","doi":"10.1055/a-2198-0545","DOIUrl":null,"url":null,"abstract":"<p><strong>Aim: </strong>Despite a vast number of articles on radiomics and machine learning in positron emission tomography (PET) imaging, clinical applicability remains limited, partly owing to poor methodological quality. We therefore systematically investigated the methodology described in publications on radiomics and machine learning for PET-based outcome prediction.</p><p><strong>Methods: </strong>A systematic search for original articles was run on PubMed. All articles were rated according to 17 criteria proposed by the authors. Criteria with >2 rating categories were binarized into \"adequate\" or \"inadequate\". The association between the number of \"adequate\" criteria per article and the date of publication was examined.</p><p><strong>Results: </strong>One hundred articles were identified (published between 07/2017 and 09/2023). The median proportion of articles per criterion that were rated \"adequate\" was 65% (range: 23-98%). Nineteen articles (19%) mentioned neither a test cohort nor cross-validation to separate training from testing. The median number of criteria with an \"adequate\" rating per article was 12.5 out of 17 (range, 4-17), and this did not increase with later dates of publication (Spearman's rho, 0.094; p = 0.35). In 22 articles (22%), less than half of the items were rated \"adequate\". Only 8% of articles published the source code, and 10% made the dataset openly available.</p><p><strong>Conclusion: </strong>Among the articles investigated, methodological weaknesses have been identified, and the degree of compliance with recommendations on methodological quality and reporting shows potential for improvement. Better adherence to established guidelines could increase the clinical significance of radiomics and machine learning for PET-based outcome prediction and finally lead to the widespread use in routine clinical practice.</p>","PeriodicalId":94161,"journal":{"name":"Nuklearmedizin. Nuclear medicine","volume":"62 6","pages":"361-369"},"PeriodicalIF":1.2000,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10667066/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Nuklearmedizin. Nuclear medicine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1055/a-2198-0545","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/11/23 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Aim: Despite a vast number of articles on radiomics and machine learning in positron emission tomography (PET) imaging, clinical applicability remains limited, partly owing to poor methodological quality. We therefore systematically investigated the methodology described in publications on radiomics and machine learning for PET-based outcome prediction.

Methods: A systematic search for original articles was run on PubMed. All articles were rated according to 17 criteria proposed by the authors. Criteria with >2 rating categories were binarized into "adequate" or "inadequate". The association between the number of "adequate" criteria per article and the date of publication was examined.
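
Illustrative sketch (not the authors' published analysis code): assuming each article's 17 criteria are stored as binary "adequate"/"inadequate" ratings, the per-article count of "adequate" criteria can be correlated with the publication date using Spearman's rank correlation, for example via scipy.stats.spearmanr in Python. The article entries below are invented for illustration.

# Hypothetical example: correlate per-article "adequate" counts with publication date.
from datetime import date
from scipy.stats import spearmanr

# Each entry: publication date and 17 binary criteria ratings (True = "adequate").
articles = [
    {"pub_date": date(2018, 3, 1), "ratings": [True] * 12 + [False] * 5},
    {"pub_date": date(2019, 11, 1), "ratings": [True] * 8 + [False] * 9},
    {"pub_date": date(2021, 6, 1), "ratings": [True] * 14 + [False] * 3},
    {"pub_date": date(2023, 1, 1), "ratings": [True] * 10 + [False] * 7},
]

adequate_counts = [sum(a["ratings"]) for a in articles]       # 0..17 per article
pub_ordinals = [a["pub_date"].toordinal() for a in articles]  # dates as orderable integers

rho, p_value = spearmanr(pub_ordinals, adequate_counts)
print(f"Spearman's rho = {rho:.3f}, p = {p_value:.2f}")

Spearman's correlation is a natural choice for such an analysis because it only assumes a monotonic relationship between the ordinal criteria count and time, without requiring linearity or normally distributed data.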

Results: One hundred articles were identified (published between 07/2017 and 09/2023). The median proportion of articles per criterion that were rated "adequate" was 65% (range: 23-98%). Nineteen articles (19%) mentioned neither a test cohort nor cross-validation to separate training from testing. The median number of criteria with an "adequate" rating per article was 12.5 out of 17 (range: 4-17), and this did not increase with later dates of publication (Spearman's rho, 0.094; p = 0.35). In 22 articles (22%), less than half of the items were rated "adequate". Only 8% of articles published the source code, and 10% made the dataset openly available.

Conclusion: Methodological weaknesses were identified in the articles investigated, and the degree of compliance with recommendations on methodological quality and reporting leaves room for improvement. Better adherence to established guidelines could increase the clinical significance of radiomics and machine learning for PET-based outcome prediction and ultimately lead to widespread use in routine clinical practice.
