Advancing radiology practice and research: harnessing the potential of large language models amidst imperfections.

BJR open · Pub Date: 2024-08-14 · eCollection Date: 2024-01-01 · DOI: 10.1093/bjro/tzae022
Eyal Klang, Lee Alper, Vera Sorin, Yiftach Barash, Girish N Nadkarni, Eyal Zimlichman
{"title":"Advancing radiology practice and research: harnessing the potential of large language models amidst imperfections.","authors":"Eyal Klang, Lee Alper, Vera Sorin, Yiftach Barash, Girish N Nadkarni, Eyal Zimlichman","doi":"10.1093/bjro/tzae022","DOIUrl":null,"url":null,"abstract":"<p><p>Large language models (LLMs) are transforming the field of natural language processing (NLP). These models offer opportunities for radiologists to make a meaningful impact in their field. NLP is a part of artificial intelligence (AI) that uses computer algorithms to study and understand text data. Recent advances in NLP include the Attention mechanism and the Transformer architecture. Transformer-based LLMs, such as GPT-4 and Gemini, are trained on massive amounts of data and generate human-like text. They are ideal for analysing large text data in academic research and clinical practice in radiology. Despite their promise, LLMs have limitations, including their dependency on the diversity and quality of their training data and the potential for false outputs. Albeit these limitations, the use of LLMs in radiology holds promise and is gaining momentum. By embracing the potential of LLMs, radiologists can gain valuable insights and improve the efficiency of their work. This can ultimately lead to improved patient care.</p>","PeriodicalId":72419,"journal":{"name":"BJR open","volume":"6 1","pages":"tzae022"},"PeriodicalIF":0.0000,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11349187/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"BJR open","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/bjro/tzae022","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Large language models (LLMs) are transforming the field of natural language processing (NLP). These models offer opportunities for radiologists to make a meaningful impact in their field. NLP is a branch of artificial intelligence (AI) that uses computer algorithms to study and understand text data. Recent advances in NLP include the attention mechanism and the Transformer architecture. Transformer-based LLMs, such as GPT-4 and Gemini, are trained on massive amounts of data and generate human-like text. They are well suited to analysing large volumes of text data in academic research and clinical practice in radiology. Despite their promise, LLMs have limitations, including their dependency on the diversity and quality of their training data and their potential to produce false outputs. Notwithstanding these limitations, the use of LLMs in radiology holds promise and is gaining momentum. By embracing the potential of LLMs, radiologists can gain valuable insights and improve the efficiency of their work, ultimately leading to improved patient care.
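The attention mechanism named in the abstract is the core computation inside the Transformer architecture that underlies models such as GPT-4 and Gemini. As a minimal illustrative sketch (not taken from the article), scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V, can be written in a few lines of NumPy; the token count and embedding size below are arbitrary toy values:

```python
# Minimal sketch of scaled dot-product attention (single head).
# Shapes and inputs are illustrative assumptions, not from the article.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for one attention head."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted mix of value vectors

# Toy example: 3 tokens with 4-dimensional embeddings.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```

In a full Transformer, this computation is repeated across multiple heads and layers, with Q, K, and V produced by learned linear projections of the token embeddings.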
