ChatGPT versus physician-derived answers to drug-related questions.

IF 1.0 · Tier 4 (Medicine) · Q3 MEDICINE, GENERAL & INTERNAL
Ole Kl Helgestad, Astrid J Hjelholt, Søren V Vestergaard, Samuel Azuz, Eva A Sædder, Thure F Overvad
{"title":"ChatGPT versus physician-derived answers to drug-related questions.","authors":"Ole Kl Helgestad, Astrid J Hjelholt, Søren V Vestergaard, Samuel Azuz, Eva A Sædder, Thure F Overvad","doi":"10.61409/A05240360","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>Large language models have recently gained interest within the medical community. Their clinical impact is currently being investigated, with potential application in pharmaceutical counselling, which has yet to be assessed.</p><p><strong>Methods: </strong>We performed a retrospective investigation of ChatGPT 3.5 and 4.0 in response to 49 consecutive inquiries encountered in the joint pharmaceutical counselling service of the Central and North Denmark regions. Answers were rated by comparing them with the answers generated by physicians.</p><p><strong>Results: </strong>ChatGPT 3.5 and 4.0 provided answers rated better or equal in 39 (80%) and 48 (98%) cases, respectively, compared to the pharmaceutical counselling service. References did not accompany answers from ChatGPT, and ChatGPT did not elaborate on what would be considered most clinically relevant when providing multiple answers.</p><p><strong>Conclusions: </strong>In drug-related questions, ChatGPT (4.0) provided answers of a reasonably high quality. The lack of references and an occasionally limited clinical interpretation makes it less useful as a primary source of information.</p><p><strong>Funding: </strong>None.</p><p><strong>Trial registration: </strong>Not relevant.</p>","PeriodicalId":11119,"journal":{"name":"Danish medical journal","volume":"72 1","pages":""},"PeriodicalIF":1.0000,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Danish medical journal","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.61409/A05240360","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"MEDICINE, GENERAL & INTERNAL","Score":null,"Total":0}
引用次数: 0

Abstract

Introduction: Large language models have recently gained interest within the medical community. Their clinical impact is currently being investigated, with potential application in pharmaceutical counselling, which has yet to be assessed.

Methods: We performed a retrospective investigation of ChatGPT 3.5 and 4.0 in response to 49 consecutive inquiries encountered in the joint pharmaceutical counselling service of the Central and North Denmark regions. Answers were rated by comparing them with the answers generated by physicians.

Results: Compared with the pharmaceutical counselling service, ChatGPT 3.5 and 4.0 provided answers rated as better than or equal to the physician-derived answers in 39 (80%) and 48 (98%) of the 49 cases, respectively. ChatGPT's answers were not accompanied by references, and when providing multiple possible answers, ChatGPT did not elaborate on which would be considered most clinically relevant.

Conclusions: For drug-related questions, ChatGPT 4.0 provided answers of reasonably high quality. However, the lack of references and the occasionally limited clinical interpretation make it less useful as a primary source of information.

Funding: None.

Trial registration: Not relevant.
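The abstract does not describe how the inquiries were submitted to ChatGPT. Purely as an illustration of the kind of workflow outlined in the Methods, the sketch below shows how consecutive drug-related inquiries might be sent to ChatGPT 3.5 and 4.0 through the OpenAI Python client (v1.x); the model identifiers, prompt wording, and example inquiry are assumptions for illustration and are not taken from the study.

```python
# Minimal sketch (not the authors' code): submit one pharmaceutical inquiry to
# ChatGPT 3.5 and 4.0 so that the answers can later be rated against the
# physician-derived answers. Model names and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

MODELS = ["gpt-3.5-turbo", "gpt-4"]  # stand-ins for "ChatGPT 3.5" and "4.0"


def ask_models(inquiry: str) -> dict[str, str]:
    """Return each model's answer to a single drug-related inquiry."""
    answers = {}
    for model in MODELS:
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": "You are answering a drug-related question from a "
                            "pharmaceutical counselling service."},
                {"role": "user", "content": inquiry},
            ],
        )
        answers[model] = response.choices[0].message.content
    return answers


if __name__ == "__main__":
    # Hypothetical example inquiry; the study used 49 consecutive real inquiries.
    example = ("Can amiodarone be co-administered with warfarin "
               "in a patient with renal impairment?")
    for model, answer in ask_models(example).items():
        print(f"--- {model} ---\n{answer}\n")
```

The rating step itself, comparing each model's answer with the physician-derived answer from the counselling service, was performed by the investigators and is not captured by this sketch.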

Source journal

Danish Medical Journal (MEDICINE, GENERAL & INTERNAL)
CiteScore: 2.30
Self-citation rate: 6.20%
Articles per year: 78
Review time: 3-8 weeks

Journal description: The Danish Medical Journal (DMJ) is a general medical journal. The journal publishes original research in English, conducted in or in relation to the Danish healthcare system. When writing for the Danish Medical Journal, please keep in mind the target audience, which is the general reader. This means that the research area should be relevant to many readers and that the paper should be presented in a way that most readers will understand. DMJ publishes the following article types:
• Original articles
• Protocol articles from large randomized clinical trials
• Systematic reviews and meta-analyses
• PhD theses from Danish faculties of health sciences
• DMSc theses from Danish faculties of health sciences