Differences in technical and clinical perspectives on AI validation in cancer imaging: mind the gap!

IF 3.7 Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Ioanna Chouvarda, Sara Colantonio, Ana S C Verde, Ana Jimenez-Pastor, Leonor Cerdá-Alberich, Yannick Metz, Lithin Zacharias, Shereen Nabhani-Gebara, Maciej Bobowicz, Gianna Tsakou, Karim Lekadir, Manolis Tsiknakis, Luis Martí-Bonmati, Nikolaos Papanikolaou
DOI: 10.1186/s41747-024-00543-0
Journal: European Radiology Experimental, vol. 9, no. 1, p. 7
Publication date: 2025-01-15 (Journal Article)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11735720/pdf/
Citations: 0

Abstract


Good practices in artificial intelligence (AI) model validation are key for achieving trustworthy AI. Within the cancer imaging domain, a field that attracts the attention of both clinical and technical AI enthusiasts, this work discusses current gaps in AI validation strategies, examining existing practices that are common or variable across technical groups (TGs) and clinical groups (CGs). The work is based on a set of structured questions encompassing several AI validation topics, addressed to professionals working in AI for medical imaging. A total of 49 responses were obtained and analysed to identify trends and patterns. While TGs valued transparency and traceability the most, CGs pointed out the importance of explainability. Among the topics where TGs may benefit from further exposure are stability and robustness checks, and mitigation of fairness issues. On the other hand, CGs seemed more reluctant towards synthetic data for validation and would benefit from exposure to cross-validation techniques or segmentation metrics. Topics emerging from the open questions were utility, capability, adoption and trustworthiness. These findings on current trends in AI validation strategies may guide the creation of guidelines necessary for training the next generation of professionals working with AI in healthcare and contribute to bridging any technical-clinical gap in AI validation.

RELEVANCE STATEMENT: This study recognised current gaps in understanding and applying AI validation strategies in cancer imaging and helped promote trust and adoption for interdisciplinary teams of technical and clinical researchers.

KEY POINTS:
- Clinical and technical researchers emphasise interpretability, external validation with diverse data, and bias awareness in AI validation for cancer imaging.
- In cancer imaging AI research, clinical researchers prioritise explainability, while technical researchers focus on transparency and traceability, and see potential in synthetic datasets.
- Researchers advocate for greater homogenisation of AI validation practices in cancer imaging.
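The abstract notes that clinical groups would benefit from exposure to cross-validation techniques and segmentation metrics. As a purely illustrative sketch (not taken from the paper itself, which reports no code), the following shows how a segmentation metric such as the Dice similarity coefficient can be evaluated under k-fold cross-validation using only NumPy. All function names and the toy masks are hypothetical:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Both masks empty: define perfect agreement.
    return 1.0 if denom == 0 else 2.0 * intersection / denom

def kfold_indices(n_samples: int, k: int, seed: int = 0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# Toy example: ten 4x4 ground-truth masks and matching predictions.
masks_true = [np.ones((4, 4), dtype=bool) for _ in range(10)]
masks_pred = [np.ones((4, 4), dtype=bool) for _ in range(10)]

scores = []
for train_idx, test_idx in kfold_indices(len(masks_true), k=5):
    # In a real study, a model would be fit on train_idx here.
    fold = [dice_score(masks_pred[i], masks_true[i]) for i in test_idx]
    scores.append(np.mean(fold))

print(round(float(np.mean(scores)), 3))  # perfect overlap -> 1.0
```

Reporting the per-fold mean (and spread) rather than a single split is the point of the technique: it gives a less optimistic, more stable estimate of validation performance, which is the kind of practice the survey suggests clinical groups are less exposed to.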

Source journal: European Radiology Experimental (Medicine: Radiology, Nuclear Medicine and Imaging)
CiteScore: 6.70
Self-citation rate: 2.60%
Annual articles: 56
Review time: 18 weeks