Decoding large language models for radiology: strategies for fine-tuning and prompt engineering.
Sanaz Vahdati, Elham Mahmoudi, Ali Ganjizadeh, Chiehju Chao, Bradley J Erickson
Radiology advances, 2(4): umaf024. Published 2025-07-28 (eCollection 2025-07-01). DOI: 10.1093/radadv/umaf024
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12429228/pdf/
Advances in large language models (LLMs) have demonstrated substantial potential for automating complex tasks within the radiology workflow. From radiology report generation and report summarization to data collection for research trials, these models have proven to be powerful tools. However, optimal implementation requires careful adaptation to the specialized medical domain. In addition, these models can generate information that is not truthful or factual, which can adversely affect patient care and clinical decisions. Strategies such as fine-tuning and prompt optimization have been shown to be effective in mitigating these errors. Although these models undergo rapid updates and improvements, understanding the principles of prompt engineering and fine-tuning provides a foundation for evaluating and maintaining the performance of any LLM deployment. This article reviews recent advances in radiology that use fine-tuning and prompt optimization to leverage LLMs' capabilities. It examines the techniques within each strategy, their advantages and limitations, and presents a framework to facilitate the practical integration of LLMs into radiology settings.
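To make the prompt-optimization strategy concrete, the sketch below assembles a few-shot prompt for radiology report summarization, one of the tasks the abstract names. The function name, the instruction wording, and the example report/impression pairs are illustrative assumptions, not taken from the article; the resulting string could be passed to any LLM client.

```python
def build_summarization_prompt(report: str, examples: list[tuple[str, str]]) -> str:
    """Compose a few-shot prompt: a role instruction, worked examples,
    then the target report awaiting its impression."""
    parts = [
        "You are a radiologist. Summarize the findings of the report below "
        "into a one-sentence impression. Do not add findings that are not "
        "stated in the report."
    ]
    # Few-shot examples ground the model in the desired style and scope.
    for source, summary in examples:
        parts.append(f"Report: {source}\nImpression: {summary}")
    # The target report ends with an open "Impression:" for the model to complete.
    parts.append(f"Report: {report}\nImpression:")
    return "\n\n".join(parts)


# Hypothetical example pair and target report, for illustration only.
examples = [
    ("Chest radiograph shows no focal consolidation, pleural effusion, "
     "or pneumothorax. Cardiomediastinal silhouette is normal.",
     "No acute cardiopulmonary abnormality."),
]
prompt = build_summarization_prompt(
    "CT abdomen demonstrates a 1.2 cm hypodense hepatic lesion, "
    "likely a simple cyst. No biliary dilatation.",
    examples,
)
```

The explicit constraint against adding unstated findings illustrates one simple hedge against the non-factual outputs the abstract warns about; the article's own techniques go further.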