Shima Behzad, Seyed M Hossein Tabatabaei, Max Y Lu, Liesl S Eibschutz, Ali Gholamrezanezhad
{"title":"放射学中人工智能应用的误区。","authors":"Shima Behzad, Seyed M Hossein Tabatabaei, Max Y Lu, Liesl S Eibschutz, Ali Gholamrezanezhad","doi":"10.2214/AJR.24.31493","DOIUrl":null,"url":null,"abstract":"<p><p>Interpretive artificial intelligence (AI) tools are poised to change the future of radiology. However, certain pitfalls may pose particular challenges for optimal AI interpretative performance. These include anatomic variants, age-related changes, postoperative changes, medical devices, image artifacts, lack of integration of prior and concurrent imaging examinations and clinical information, and the satisfaction-of-search effect. Model training and development should account for such pitfalls to minimize errors and optimize interpretation accuracy. More broadly, AI algorithms should be exposed to diverse and complex training datasets to yield a holistic interpretation that considers all relevant information beyond the individual examination. Successful clinical deployment of AI tools will require that radiologist end users recognize these pitfalls and other limitations of the available models. Furthermore, developers should incorporate explainable AI techniques (e.g., heat maps) into their tools, to improve radiologists' understanding of model outputs and to enable radiologists to provide feedback for guiding continuous learning and iterative refinement. In this article, we provide an overview of common pitfalls that radiologists may encounter when using interpretive AI products in daily practice. 
We present how such pitfalls lead to AI errors and offer potential strategies that AI developers may use for their mitigation.</p>","PeriodicalId":55529,"journal":{"name":"American Journal of Roentgenology","volume":null,"pages":null},"PeriodicalIF":4.7000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Pitfalls in Interpretive Applications of Artificial Intelligence in Radiology.\",\"authors\":\"Shima Behzad, Seyed M Hossein Tabatabaei, Max Y Lu, Liesl S Eibschutz, Ali Gholamrezanezhad\",\"doi\":\"10.2214/AJR.24.31493\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Interpretive artificial intelligence (AI) tools are poised to change the future of radiology. However, certain pitfalls may pose particular challenges for optimal AI interpretative performance. These include anatomic variants, age-related changes, postoperative changes, medical devices, image artifacts, lack of integration of prior and concurrent imaging examinations and clinical information, and the satisfaction-of-search effect. Model training and development should account for such pitfalls to minimize errors and optimize interpretation accuracy. More broadly, AI algorithms should be exposed to diverse and complex training datasets to yield a holistic interpretation that considers all relevant information beyond the individual examination. Successful clinical deployment of AI tools will require that radiologist end users recognize these pitfalls and other limitations of the available models. Furthermore, developers should incorporate explainable AI techniques (e.g., heat maps) into their tools, to improve radiologists' understanding of model outputs and to enable radiologists to provide feedback for guiding continuous learning and iterative refinement. 
In this article, we provide an overview of common pitfalls that radiologists may encounter when using interpretive AI products in daily practice. We present how such pitfalls lead to AI errors and offer potential strategies that AI developers may use for their mitigation.</p>\",\"PeriodicalId\":55529,\"journal\":{\"name\":\"American Journal of Roentgenology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.7000,\"publicationDate\":\"2024-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"American Journal of Roentgenology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.2214/AJR.24.31493\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/7/24 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"American Journal of Roentgenology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.2214/AJR.24.31493","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/7/24 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Pitfalls in Interpretive Applications of Artificial Intelligence in Radiology.
Interpretive artificial intelligence (AI) tools are poised to change the future of radiology. However, certain pitfalls may pose particular challenges for optimal AI interpretive performance. These include anatomic variants, age-related changes, postoperative changes, medical devices, image artifacts, lack of integration of prior and concurrent imaging examinations and clinical information, and the satisfaction-of-search effect. Model training and development should account for such pitfalls to minimize errors and optimize interpretation accuracy. More broadly, AI algorithms should be exposed to diverse and complex training datasets to yield a holistic interpretation that considers all relevant information beyond the individual examination. Successful clinical deployment of AI tools will require that radiologist end users recognize these pitfalls and other limitations of the available models. Furthermore, developers should incorporate explainable AI techniques (e.g., heat maps) into their tools to improve radiologists' understanding of model outputs and to enable radiologists to provide feedback for guiding continuous learning and iterative refinement. In this article, we provide an overview of common pitfalls that radiologists may encounter when using interpretive AI products in daily practice. We describe how such pitfalls lead to AI errors and offer potential strategies that AI developers may use to mitigate them.
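The abstract mentions heat maps as an explainable-AI technique for showing radiologists which image regions drive a model's output. The article itself gives no implementation; as a purely illustrative sketch (not the authors' method), one simple way to produce such a map is occlusion sensitivity: mask each image patch in turn and record how much the model's confidence drops. The `toy_score` "model" below is a hypothetical stand-in for a real classifier.

```python
import numpy as np

def occlusion_heatmap(model_score, image, patch=4, baseline=0.0):
    """Occlusion-sensitivity heat map.

    model_score: callable mapping a 2-D image array to a scalar confidence.
    Each patch is replaced with `baseline`, and the resulting confidence
    drop is attributed to that region. Larger values mark regions the
    model relies on for its prediction.
    """
    h, w = image.shape
    base = model_score(image)
    heat = np.zeros_like(image, dtype=float)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            # Confidence drop caused by hiding this patch.
            heat[y:y + patch, x:x + patch] = base - model_score(occluded)
    return heat

# Hypothetical toy "model": confidence = mean intensity of the top-left quadrant.
def toy_score(img):
    return img[:8, :8].mean()

img = np.zeros((16, 16))
img[:8, :8] = 1.0  # bright "finding" in the top-left quadrant
hm = occlusion_heatmap(toy_score, img)
```

In this toy example, the heat map is positive only over the top-left quadrant, matching the region the score actually depends on; overlaid on the input, such a map is the kind of visual explanation the abstract describes.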
Journal Introduction:
Founded in 1907, the monthly American Journal of Roentgenology (AJR) is the world's longest continuously published general radiology journal. AJR is recognized as among the specialty's leading peer-reviewed journals and has a worldwide circulation of close to 25,000. The journal publishes clinically oriented articles across all radiology subspecialties, seeking relevance to radiologists' daily practice. The journal publishes hundreds of articles annually in a diverse range of formats, including original research, reviews, clinical perspectives, editorials, and other short reports. The journal engages its audience through a spectrum of social media and digital communication activities.