A Hitchhiker's Guide to Good Prompting Practices for Large Language Models in Radiology

Satvik Tripathi, Dana Alkhulaifat MD, Shawn Lyo MD, Rithvik Sukumaran, Bolin Li MS, Vedant Acharya MD, Rafe McBeth PhD, Tessa S. Cook MD, PhD

Journal of the American College of Radiology, Volume 22, Issue 7, Pages 841-847, July 2025
DOI: 10.1016/j.jacr.2025.02.051
Citations: 0
Abstract
Large language models (LLMs) are reshaping radiology through their advanced capabilities in tasks such as medical report generation and clinical decision support. However, their effectiveness is heavily influenced by prompt engineering—the design of input prompts that guide the model’s responses. This review aims to illustrate how different prompt engineering techniques, including zero-shot, one-shot, few-shot, chain of thought, and tree of thought, affect LLM performance in a radiology context. In addition, we explore the impact of prompt complexity and temperature settings on the relevance and accuracy of model outputs. This article highlights the importance of precise and iterative prompt design to enhance LLM reliability in radiology, emphasizing the need for methodological rigor and transparency to drive progress and ensure ethical use in health care.
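To make the techniques the abstract names concrete, the sketch below shows how zero-shot, one-/few-shot, and chain-of-thought prompts differ purely at the prompt-construction level, with no model call. The task wording, example findings, and the `build_prompt` helper are hypothetical illustrations, not content from the article itself.

```python
def build_prompt(task, examples=None, chain_of_thought=False):
    """Assemble a prompt in one of the styles discussed in the review:
    zero-shot (task only), one-/few-shot (task plus worked examples),
    optionally with a chain-of-thought instruction appended."""
    parts = [task]
    for finding, impression in (examples or []):
        # Each worked example pairs an input finding with its desired output.
        parts.append(f"Finding: {finding}\nImpression: {impression}")
    if chain_of_thought:
        parts.append("Reason step by step before giving the final impression.")
    return "\n\n".join(parts)


task = "Summarize the radiology finding into a one-line impression."

# Zero-shot: the task alone, relying entirely on the model's prior training.
zero_shot = build_prompt(task)

# Few-shot: two worked examples condition the model on the expected format.
few_shot = build_prompt(task, examples=[
    ("5 mm ground-glass nodule in the right upper lobe.",
     "Small RUL ground-glass nodule; follow-up per Fleischner guidelines."),
    ("No acute cardiopulmonary abnormality.",
     "Normal chest radiograph."),
])

# Chain of thought: an explicit instruction to reason before answering.
cot = build_prompt(task, chain_of_thought=True)
```

A temperature setting, by contrast, is not part of the prompt text but a decoding parameter passed at inference time (lower values yield more deterministic outputs), which is why the review treats it separately from prompt wording.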
About the Journal
The official journal of the American College of Radiology, JACR informs its readers of timely, pertinent, and important topics affecting the practice of diagnostic radiologists, interventional radiologists, medical physicists, and radiation oncologists. In so doing, JACR improves their practices and helps optimize their role in the health care system. By providing a forum for informative, well-written articles on health policy, clinical practice, practice management, data science, and education, JACR engages readers in a dialogue that ultimately benefits patient care.