[Les Printemps de la Médecine Interne: artificial intelligence versus expert internists on internal medicine cases].

Pub Date : 2024-07-01 DOI:10.1016/j.revmed.2024.01.012
A. Albaladejo , A. Lorleac’h , J.-S. Allain
Volume 45, Issue 7, Pages 409–414. Full text PDF: https://www.sciencedirect.com/science/article/pii/S0248866324000328
Les Printemps de la Médecine Interne : l’intelligence artificielle face aux experts internistes

Introduction

The “Printemps de la Médecine Interne” are training days for Francophone internists, during which complex clinical cases are presented. This study aims to evaluate the diagnostic capabilities of two non-specialized artificial-intelligence language models, ChatGPT-4 and Bard, by confronting them with the diagnostic puzzles of the “Printemps de la Médecine Interne”.

Method

Clinical cases from the 2021 and 2022 editions of the “Printemps de la Médecine Interne” were submitted to two language models, ChatGPT-4 and Bard. When a model's first answer was wrong, it was offered a second attempt. We then compared the responses of the human internist experts with those of the artificial intelligence.
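The protocol described above can be sketched as follows. This is a hypothetical illustration, not the authors' actual code; the function names (`evaluate`, `ask_model`, `demo_model`) and the demo diagnoses are invented for the example.

```python
# Hypothetical sketch of the evaluation protocol -- not the authors' actual
# code. Names (evaluate, ask_model, demo_model) and diagnoses are invented.

def evaluate(cases, ask_model):
    """Count cases solved in at most two attempts.

    cases: list of (case_text, correct_diagnosis) pairs.
    ask_model: callable taking (case_text, attempt_index) -> proposed diagnosis.
    """
    solved = 0
    for case_text, correct in cases:
        for attempt in range(2):  # first try, plus one retry after a wrong answer
            if ask_model(case_text, attempt) == correct:
                solved += 1
                break
    return solved

# Toy stand-in for a language model: answers "case 1" correctly at once,
# and "case 2" only on the retry.
demo_cases = [("case 1", "sarcoidosis"), ("case 2", "amyloidosis")]

def demo_model(case_text, attempt):
    if case_text == "case 1":
        return "sarcoidosis"
    return "amyloidosis" if attempt == 1 else "unknown"

print(evaluate(demo_cases, demo_model))  # prints 2
```

In the study, the "correct" label was the diagnosis accepted at the training days, and answer matching was judged by humans rather than by string equality as in this toy version.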

Results

Of the 12 clinical cases submitted, the human internist experts diagnosed nine, ChatGPT-4 diagnosed three, and Bard diagnosed one. One of the cases solved by ChatGPT-4 was not solved by the internist expert. Both models produced their answers within a few seconds.
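The reported counts translate into the following success rates, a quick arithmetic check on the abstract's figures:

```python
# Success rate per diagnostician out of the 12 submitted cases,
# using the counts reported in the Results section.
total_cases = 12
solved = {"internist experts": 9, "ChatGPT-4": 3, "Bard": 1}
rates = {who: n / total_cases for who, n in solved.items()}
for who, rate in rates.items():
    print(f"{who}: {rate:.0%}")  # 75%, 25%, 8% respectively
```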

Conclusions

Currently, the diagnostic skills of ChatGPT-4 and Bard are inferior to those of human experts at solving complex clinical cases, but they are very promising. Although only recently made available to the general public, these models already show impressive capabilities, raising questions about the role of the diagnostic physician. It would be advisable to adapt the rules or subjects of future “Printemps de la Médecine Interne” so that the cases cannot be solved by a publicly available language model.
