Best Practices for the Safe Use of Large Language Models and Other Generative AI in Radiology.

IF 15.2 · Zone 1 (Medicine) · Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Radiology Pub Date : 2025-09-01 DOI:10.1148/radiol.241516
Paul H Yi, Hana L Haver, Jean J Jeudy, Woojin Kim, Felipe C Kitamura, Eniola T Oluyemi, Andrew D Smith, Linda Moy, Vishwa S Parekh
Citations: 0

Abstract


As large language models (LLMs) and other generative artificial intelligence (AI) models are rapidly integrated into radiology workflows, unique pitfalls threatening their safe use have emerged. Problems with AI are often identified only after public release, highlighting the need for preventive measures to mitigate negative impacts and ensure safe, effective deployment into clinical settings. This article summarizes best practices for the safe use of LLMs and other generative AI models in radiology, focusing on three key areas that can lead to pitfalls if overlooked: regulatory issues, data privacy, and bias. To address these areas and minimize risk to patients, radiologists must examine all potential failure modes and ensure vendor transparency. These best practices are based on the best available evidence and the experiences of leaders in the field. Ultimately, this article provides actionable guidelines for radiologists, radiology departments, and vendors using and integrating generative AI into radiology workflows, offering a framework to prevent these problems.

Source journal: Radiology (Medicine - Nuclear Medicine)
CiteScore: 35.20
Self-citation rate: 3.00%
Annual articles: 596
Review time: 3.6 months
Journal overview: Published regularly since 1923 by the Radiological Society of North America (RSNA), Radiology has long been recognized as the authoritative reference for the most current, clinically relevant, and highest quality research in the field of radiology. Each month the journal publishes approximately 240 pages of peer-reviewed original research, authoritative reviews, well-balanced commentary on significant articles, and expert opinion on new techniques and technologies. Radiology publishes cutting-edge and impactful imaging research articles in radiology and medical imaging in order to help improve human health.