Multimodal radiology AI

Pingkun Yan, Ge Wang, Hanqing Chao, Mannudeep K. Kalra
{"title":"多模态放射学AI","authors":"Pingkun Yan,&nbsp;Ge Wang,&nbsp;Hanqing Chao,&nbsp;Mannudeep K. Kalra","doi":"10.1016/j.metrad.2023.100019","DOIUrl":null,"url":null,"abstract":"<div><p>The growing armamentarium of artificial intelligence (AI) tools cleared by the United States Food and Drug Administration mostly target a narrow, single imaging modality or data source of information. While imaging technologies continue evolving rapidly, it is recognized that multimodal data provides synergistic information and enables better performance than what is achievable when these modalities are used separately. Deep learning approaches can integrate multimodal data, including not only imaging but also non-imaging modalities such as electronic medical records (EMRs) and genetic profiles. Such convergence advances clinical applications and research for improved effectiveness, especially the prediction of disease risks. This new avenue could address concerns over justification of imaging scans, clinical context-based interpretation of examinations, effectiveness of single modal and multimodal data to influence clinical decision making, as well as prediction of personalized disease risk. In this new era of radiology AI, the paradigm is being shifted from imaging alone AI analytics to multimodal artificial general intelligence (AGI). The heterogeneity of the data and the non-intuitive nature of certain modalities pose major challenges for developing multimodal large AI models and at the same time bring enormous opportunities.</p></div>","PeriodicalId":100921,"journal":{"name":"Meta-Radiology","volume":"1 2","pages":"Article 100019"},"PeriodicalIF":0.0000,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multimodal radiology AI\",\"authors\":\"Pingkun Yan,&nbsp;Ge Wang,&nbsp;Hanqing Chao,&nbsp;Mannudeep K. Kalra\",\"doi\":\"10.1016/j.metrad.2023.100019\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The growing armamentarium of artificial intelligence (AI) tools cleared by the United States Food and Drug Administration mostly target a narrow, single imaging modality or data source of information. While imaging technologies continue evolving rapidly, it is recognized that multimodal data provides synergistic information and enables better performance than what is achievable when these modalities are used separately. Deep learning approaches can integrate multimodal data, including not only imaging but also non-imaging modalities such as electronic medical records (EMRs) and genetic profiles. Such convergence advances clinical applications and research for improved effectiveness, especially the prediction of disease risks. This new avenue could address concerns over justification of imaging scans, clinical context-based interpretation of examinations, effectiveness of single modal and multimodal data to influence clinical decision making, as well as prediction of personalized disease risk. In this new era of radiology AI, the paradigm is being shifted from imaging alone AI analytics to multimodal artificial general intelligence (AGI). 
The heterogeneity of the data and the non-intuitive nature of certain modalities pose major challenges for developing multimodal large AI models and at the same time bring enormous opportunities.</p></div>\",\"PeriodicalId\":100921,\"journal\":{\"name\":\"Meta-Radiology\",\"volume\":\"1 2\",\"pages\":\"Article 100019\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Meta-Radiology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S295016282300019X\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Meta-Radiology","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S295016282300019X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract


The growing armamentarium of artificial intelligence (AI) tools cleared by the United States Food and Drug Administration mostly targets a narrow, single imaging modality or data source. While imaging technologies continue to evolve rapidly, it is recognized that multimodal data provide synergistic information and enable better performance than these modalities can achieve when used separately. Deep learning approaches can integrate multimodal data, including not only imaging but also non-imaging modalities such as electronic medical records (EMRs) and genetic profiles. Such convergence advances clinical applications and research toward improved effectiveness, especially in the prediction of disease risk. This new avenue could address concerns over the justification of imaging scans, clinical context-based interpretation of examinations, the effectiveness of single-modality and multimodal data in influencing clinical decision making, and the prediction of personalized disease risk. In this new era of radiology AI, the paradigm is shifting from imaging-only AI analytics to multimodal artificial general intelligence (AGI). The heterogeneity of the data and the non-intuitive nature of certain modalities pose major challenges for developing large multimodal AI models, while at the same time bringing enormous opportunities.
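To make the fusion idea concrete, the following is a minimal, illustrative late-fusion sketch in PyTorch. It is not the architecture proposed in the article; the LateFusionRiskModel name, the network sizes, and the choice of a single-channel scan plus 20 tabular EMR features are assumptions made only to show how an imaging branch and a non-imaging branch can be combined for disease-risk prediction.

# Illustrative late-fusion sketch (assumed architecture, not from the article):
# a CNN encodes an imaging study while an MLP encodes non-imaging data
# (e.g., EMR variables or genetic-profile features); the two embeddings are
# concatenated and passed to a risk-prediction head.
import torch
import torch.nn as nn


class LateFusionRiskModel(nn.Module):
    def __init__(self, num_emr_features: int, num_classes: int = 2):
        super().__init__()
        # Imaging branch: small CNN over a single-channel scan (e.g., a CT slice).
        self.image_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Non-imaging branch: MLP over tabular EMR / genomic features.
        self.emr_encoder = nn.Sequential(
            nn.Linear(num_emr_features, 32), nn.ReLU(),
        )
        # Fusion head: concatenated embeddings -> disease-risk logits.
        self.head = nn.Linear(32 + 32, num_classes)

    def forward(self, image: torch.Tensor, emr: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.image_encoder(image), self.emr_encoder(emr)], dim=1)
        return self.head(z)


if __name__ == "__main__":
    model = LateFusionRiskModel(num_emr_features=20)
    image = torch.randn(4, 1, 128, 128)  # batch of 4 single-channel scans
    emr = torch.randn(4, 20)             # 20 tabular features per patient
    logits = model(image, emr)
    print(logits.shape)  # torch.Size([4, 2])

Late fusion is only one of several possible designs; early fusion, cross-attention, or transformer-based multimodal models are equally plausible realizations of the integration the abstract describes.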
