Francesco Giganti, Nadia Moreira da Silva, Michael Yeung, Lucy Davies, Amy Frary, Mirjana Ferrer Rodriguez, Nikita Sushentsev, Nicholas Ashley, Adrian Andreou, Alison Bradley, Chris Wilson, Giles Maskell, Giorgio Brembilla, Iztok Caglic, Jakub Suchánek, Jobie Budd, Zobair Arya, Jonathan Aning, John Hayes, Mark De Bono, Nikhil Vasdev, Nimalan Sanmugalingam, Paul Burn, Raj Persad, Ramona Woitek, Richard Hindley, Sidath Liyanage, Sophie Squire, Tristan Barrett, Steffi Barwick, Mark Hinton, Anwar R Padhani, Antony Rix, Aarti Shah, Evis Sala
{"title":"人工智能前列腺癌检测:一项多中心、多扫描仪验证研究。","authors":"Francesco Giganti, Nadia Moreira da Silva, Michael Yeung, Lucy Davies, Amy Frary, Mirjana Ferrer Rodriguez, Nikita Sushentsev, Nicholas Ashley, Adrian Andreou, Alison Bradley, Chris Wilson, Giles Maskell, Giorgio Brembilla, Iztok Caglic, Jakub Suchánek, Jobie Budd, Zobair Arya, Jonathan Aning, John Hayes, Mark De Bono, Nikhil Vasdev, Nimalan Sanmugalingam, Paul Burn, Raj Persad, Ramona Woitek, Richard Hindley, Sidath Liyanage, Sophie Squire, Tristan Barrett, Steffi Barwick, Mark Hinton, Anwar R Padhani, Antony Rix, Aarti Shah, Evis Sala","doi":"10.1007/s00330-024-11323-0","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>Multi-centre, multi-vendor validation of artificial intelligence (AI) software to detect clinically significant prostate cancer (PCa) using multiparametric magnetic resonance imaging (MRI) is lacking. We compared a new AI solution, validated on a separate dataset from different UK hospitals, to the original multidisciplinary team (MDT)-supported radiologist's interpretations.</p><p><strong>Materials and methods: </strong>A Conformité Européenne (CE)-marked deep-learning (DL) computer-aided detection (CAD) medical device (Pi) was trained to detect Gleason Grade Group (GG) ≥ 2 cancer using retrospective data from the PROSTATEx dataset and five UK hospitals (793 patients). Our separate validation dataset was on six machines from two manufacturers across six sites (252 patients). Data included in the study were from MRI scans performed between August 2018 to October 2022. Patients with a negative MRI who did not undergo biopsy were assumed to be negative (90.4% had prostate-specific antigen density < 0.15 ng/mL<sup>2</sup>). ROC analysis was used to compare radiologists who used a 5-category suspicion score.</p><p><strong>Results: </strong>GG ≥ 2 prevalence in the validation set was 31%. Evaluated per patient, Pi was non-inferior to radiologists (considering a 10% performance difference as acceptable), with an area under the curve (AUC) of 0.91 vs. 0.95. At the predetermined risk threshold of 3.5, the AI software's sensitivity was 95% and specificity 67%, while radiologists at Prostate Imaging-Reporting and Data Systems/Likert ≥ 3 identified GG ≥ 2 with a sensitivity of 99% and specificity of 73%. AI performed well per-site (AUC ≥ 0.83) at the patient-level independent of scanner age and field strength.</p><p><strong>Conclusion: </strong>Real-world data testing suggests that Pi matches the performance of MDT-supported radiologists in GG ≥ 2 PCa detection and generalises to multiple sites, scanner vendors, and models.</p><p><strong>Key points: </strong>QuestionThe performance of artificial intelligence-based medical tools for prostate MRI has yet to be evaluated on multi-centre, multi-vendor data to assess generalisability. FindingsA dedicated AI medical tool matches the performance of multidisciplinary team-supported radiologists in prostate cancer detection and generalises to multiple sites and scanners. 
Clinical relevanceThis software has the potential to support the MRI process for biopsy decision-making and target identification, but future prospective studies, where lesions identified by artificial intelligence are biopsied separately, are needed.</p>","PeriodicalId":12076,"journal":{"name":"European Radiology","volume":" ","pages":"4915-4924"},"PeriodicalIF":4.7000,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12226644/pdf/","citationCount":"0","resultStr":"{\"title\":\"AI-powered prostate cancer detection: a multi-centre, multi-scanner validation study.\",\"authors\":\"Francesco Giganti, Nadia Moreira da Silva, Michael Yeung, Lucy Davies, Amy Frary, Mirjana Ferrer Rodriguez, Nikita Sushentsev, Nicholas Ashley, Adrian Andreou, Alison Bradley, Chris Wilson, Giles Maskell, Giorgio Brembilla, Iztok Caglic, Jakub Suchánek, Jobie Budd, Zobair Arya, Jonathan Aning, John Hayes, Mark De Bono, Nikhil Vasdev, Nimalan Sanmugalingam, Paul Burn, Raj Persad, Ramona Woitek, Richard Hindley, Sidath Liyanage, Sophie Squire, Tristan Barrett, Steffi Barwick, Mark Hinton, Anwar R Padhani, Antony Rix, Aarti Shah, Evis Sala\",\"doi\":\"10.1007/s00330-024-11323-0\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objectives: </strong>Multi-centre, multi-vendor validation of artificial intelligence (AI) software to detect clinically significant prostate cancer (PCa) using multiparametric magnetic resonance imaging (MRI) is lacking. We compared a new AI solution, validated on a separate dataset from different UK hospitals, to the original multidisciplinary team (MDT)-supported radiologist's interpretations.</p><p><strong>Materials and methods: </strong>A Conformité Européenne (CE)-marked deep-learning (DL) computer-aided detection (CAD) medical device (Pi) was trained to detect Gleason Grade Group (GG) ≥ 2 cancer using retrospective data from the PROSTATEx dataset and five UK hospitals (793 patients). Our separate validation dataset was on six machines from two manufacturers across six sites (252 patients). Data included in the study were from MRI scans performed between August 2018 to October 2022. Patients with a negative MRI who did not undergo biopsy were assumed to be negative (90.4% had prostate-specific antigen density < 0.15 ng/mL<sup>2</sup>). ROC analysis was used to compare radiologists who used a 5-category suspicion score.</p><p><strong>Results: </strong>GG ≥ 2 prevalence in the validation set was 31%. Evaluated per patient, Pi was non-inferior to radiologists (considering a 10% performance difference as acceptable), with an area under the curve (AUC) of 0.91 vs. 0.95. At the predetermined risk threshold of 3.5, the AI software's sensitivity was 95% and specificity 67%, while radiologists at Prostate Imaging-Reporting and Data Systems/Likert ≥ 3 identified GG ≥ 2 with a sensitivity of 99% and specificity of 73%. AI performed well per-site (AUC ≥ 0.83) at the patient-level independent of scanner age and field strength.</p><p><strong>Conclusion: </strong>Real-world data testing suggests that Pi matches the performance of MDT-supported radiologists in GG ≥ 2 PCa detection and generalises to multiple sites, scanner vendors, and models.</p><p><strong>Key points: </strong>QuestionThe performance of artificial intelligence-based medical tools for prostate MRI has yet to be evaluated on multi-centre, multi-vendor data to assess generalisability. 
FindingsA dedicated AI medical tool matches the performance of multidisciplinary team-supported radiologists in prostate cancer detection and generalises to multiple sites and scanners. Clinical relevanceThis software has the potential to support the MRI process for biopsy decision-making and target identification, but future prospective studies, where lesions identified by artificial intelligence are biopsied separately, are needed.</p>\",\"PeriodicalId\":12076,\"journal\":{\"name\":\"European Radiology\",\"volume\":\" \",\"pages\":\"4915-4924\"},\"PeriodicalIF\":4.7000,\"publicationDate\":\"2025-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12226644/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"European Radiology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1007/s00330-024-11323-0\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/2/28 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Radiology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1007/s00330-024-11323-0","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/2/28 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0
Abstract
AI-powered prostate cancer detection: a multi-centre, multi-scanner validation study.
Objectives: Multi-centre, multi-vendor validation of artificial intelligence (AI) software to detect clinically significant prostate cancer (PCa) using multiparametric magnetic resonance imaging (MRI) is lacking. We compared a new AI solution, validated on a separate dataset from different UK hospitals, to the original multidisciplinary team (MDT)-supported radiologist's interpretations.
Materials and methods: A Conformité Européenne (CE)-marked deep-learning (DL) computer-aided detection (CAD) medical device (Pi) was trained to detect Gleason Grade Group (GG) ≥ 2 cancer using retrospective data from the PROSTATEx dataset and five UK hospitals (793 patients). Our separate validation dataset comprised scans from six machines from two manufacturers across six sites (252 patients). Data included in the study were from MRI scans performed between August 2018 and October 2022. Patients with a negative MRI who did not undergo biopsy were assumed to be negative (90.4% had prostate-specific antigen density < 0.15 ng/mL²). ROC analysis was used to compare Pi with radiologists, who used a 5-category suspicion score.
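The methods describe a per-patient ROC analysis in which the AI device outputs a continuous risk score, radiologists assign a 5-category suspicion score, and unbiopsied MRI-negative patients are counted as negatives when their PSA density is low. The sketch below is a minimal, hypothetical illustration of that kind of analysis in Python; the synthetic data, score distributions, and variable names are invented for illustration and are not the authors' code or data.

```python
# Illustrative sketch (not the authors' code) of a per-patient ROC comparison:
# ground truth is GG >= 2, the AI gives a continuous risk score, and the
# radiologist gives a 5-category (PI-RADS/Likert) suspicion score.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical cohort, sized like the validation set, with ~31% prevalence.
n = 252
y_true = rng.binomial(1, 0.31, size=n)                                # GG >= 2 labels
ai_score = np.clip(rng.normal(2.5 + 1.5 * y_true, 1.0), 1, 5)          # continuous AI risk score
radiologist = np.clip(np.round(rng.normal(2.5 + 1.5 * y_true, 1.0)), 1, 5)  # 5-point score

# PSA density = PSA (ng/mL) / prostate volume (mL); values < 0.15 ng/mL^2
# support treating unbiopsied MRI-negative patients as negative.
psa, volume = 6.2, 48.0
psa_density = psa / volume  # ~0.13 ng/mL^2

print("AI AUC:", roc_auc_score(y_true, ai_score))
print("Radiologist AUC:", roc_auc_score(y_true, radiologist))
print("PSA density:", round(psa_density, 3))
```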
Results: GG ≥ 2 prevalence in the validation set was 31%. Evaluated per patient, Pi was non-inferior to radiologists (considering a 10% performance difference as acceptable), with an area under the curve (AUC) of 0.91 vs. 0.95. At the predetermined risk threshold of 3.5, the AI software's sensitivity was 95% and specificity 67%, while radiologists at Prostate Imaging Reporting and Data System (PI-RADS)/Likert ≥ 3 identified GG ≥ 2 with a sensitivity of 99% and specificity of 73%. AI performed well per site (AUC ≥ 0.83) at the patient level, independent of scanner age and field strength.
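As a worked illustration of the reported operating points and the 10% non-inferiority margin, the pure-Python sketch below computes sensitivity and specificity at a chosen threshold and checks whether a point-estimate AUC difference falls within the margin. The helper function and numbers are illustrative only; the paper's formal non-inferiority analysis presumably relies on confidence intervals rather than this simple point comparison.

```python
def sens_spec(y_true, scores, threshold):
    """Sensitivity and specificity when calling 'positive' at score >= threshold."""
    pred = [s >= threshold for s in scores]
    tp = sum(p and t for p, t in zip(pred, y_true))
    tn = sum((not p) and (not t) for p, t in zip(pred, y_true))
    fn = sum((not p) and t for p, t in zip(pred, y_true))
    fp = sum(p and (not t) for p, t in zip(pred, y_true))
    return tp / (tp + fn), tn / (tn + fp)

# Tiny hypothetical example: AI scores thresholded at 3.5 (the operating point above).
sens, spec = sens_spec([1, 0, 1, 0], [4.2, 2.1, 3.6, 3.9], 3.5)
print(sens, spec)  # 1.0 0.5

# Non-inferiority check on the reported point-estimate AUCs with a 10% (0.10) margin:
auc_radiologist, auc_ai, margin = 0.95, 0.91, 0.10
non_inferior = auc_ai >= auc_radiologist - margin  # 0.91 >= 0.85 -> True
print(non_inferior)
```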
Conclusion: Real-world data testing suggests that Pi matches the performance of MDT-supported radiologists in GG ≥ 2 PCa detection and generalises to multiple sites, scanner vendors, and models.
Key points:
Question: The performance of artificial intelligence-based medical tools for prostate MRI has yet to be evaluated on multi-centre, multi-vendor data to assess generalisability.
Findings: A dedicated AI medical tool matches the performance of multidisciplinary team-supported radiologists in prostate cancer detection and generalises to multiple sites and scanners.
Clinical relevance: This software has the potential to support the MRI process for biopsy decision-making and target identification, but future prospective studies, where lesions identified by artificial intelligence are biopsied separately, are needed.
Journal description:
European Radiology (ER) continuously updates scientific knowledge in radiology by publication of strong original articles and state-of-the-art reviews written by leading radiologists. A well-balanced combination of review articles, original papers, short communications from European radiological congresses and information on society matters makes ER an indispensable source of current information in this field.
This is the Journal of the European Society of Radiology, and the official journal of a number of societies.
From 2004-2008 supplements to European Radiology were published under its companion, European Radiology Supplements, ISSN 1613-3749.