{"title":"[人工智能的春天:内科病例的人工智能与专家]。","authors":"A. Albaladejo , A. Lorleac’h , J.-S. Allain","doi":"10.1016/j.revmed.2024.01.012","DOIUrl":null,"url":null,"abstract":"<div><h3>Introduction</h3><p>The “Printemps de la Médecine Interne” are training days for Francophone internists. The clinical cases presented during these days are complex. This study aims to evaluate the diagnostic capabilities of non-specialized artificial intelligence (language models) ChatGPT-4 and Bard by confronting them with the puzzles of the “Printemps de la Médecine Interne”.</p></div><div><h3>Method</h3><p>Clinical cases from the “Printemps de la Médecine Interne” 2021 and 2022 were submitted to two language models: ChatGPT-4 and Bard. In case of a wrong answer, a second attempt was offered. We then compared the responses of human internist experts to those of artificial intelligence.</p></div><div><h3>Results</h3><p>Of the 12 clinical cases submitted, human internist experts diagnosed nine, ChatGPT-4 diagnosed three, and Bard diagnosed one. One of the cases solved by ChatGPT-4 was not solved by the internist expert. The artificial intelligence had a response time of a few seconds.</p></div><div><h3>Conclusions</h3><p>Currently, the diagnostic skills of ChatGPT-4 and Bard are inferior to those of human experts in solving complex clinical cases but are very promising. Recently made available to the general public, they already have impressive capabilities, questioning the role of the diagnostic physician. It would be advisable to adapt the rules or subjects of future “Printemps de la Médecine Interne” so that they are not solved by a public language model.</p></div>","PeriodicalId":0,"journal":{"name":"","volume":"45 7","pages":"Pages 409-414"},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0248866324000328/pdfft?md5=a553b4a34e0c183c8b6d87a043f38a57&pid=1-s2.0-S0248866324000328-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Les Printemps de la Médecine Interne : l’intelligence artificielle face aux experts internistes\",\"authors\":\"A. Albaladejo , A. Lorleac’h , J.-S. Allain\",\"doi\":\"10.1016/j.revmed.2024.01.012\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Introduction</h3><p>The “Printemps de la Médecine Interne” are training days for Francophone internists. The clinical cases presented during these days are complex. This study aims to evaluate the diagnostic capabilities of non-specialized artificial intelligence (language models) ChatGPT-4 and Bard by confronting them with the puzzles of the “Printemps de la Médecine Interne”.</p></div><div><h3>Method</h3><p>Clinical cases from the “Printemps de la Médecine Interne” 2021 and 2022 were submitted to two language models: ChatGPT-4 and Bard. In case of a wrong answer, a second attempt was offered. We then compared the responses of human internist experts to those of artificial intelligence.</p></div><div><h3>Results</h3><p>Of the 12 clinical cases submitted, human internist experts diagnosed nine, ChatGPT-4 diagnosed three, and Bard diagnosed one. One of the cases solved by ChatGPT-4 was not solved by the internist expert. The artificial intelligence had a response time of a few seconds.</p></div><div><h3>Conclusions</h3><p>Currently, the diagnostic skills of ChatGPT-4 and Bard are inferior to those of human experts in solving complex clinical cases but are very promising. 
Recently made available to the general public, they already have impressive capabilities, questioning the role of the diagnostic physician. It would be advisable to adapt the rules or subjects of future “Printemps de la Médecine Interne” so that they are not solved by a public language model.</p></div>\",\"PeriodicalId\":0,\"journal\":{\"name\":\"\",\"volume\":\"45 7\",\"pages\":\"Pages 409-414\"},\"PeriodicalIF\":0.0,\"publicationDate\":\"2024-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S0248866324000328/pdfft?md5=a553b4a34e0c183c8b6d87a043f38a57&pid=1-s2.0-S0248866324000328-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0248866324000328\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0248866324000328","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Introduction
The “Printemps de la Médecine Interne” are training days for Francophone internists, and the clinical cases presented there are deliberately complex. This study evaluates the diagnostic capabilities of two non-specialized artificial intelligence language models, ChatGPT-4 and Bard, by confronting them with the diagnostic puzzles of the “Printemps de la Médecine Interne”.
Method
Clinical cases from the 2021 and 2022 editions of the “Printemps de la Médecine Interne” were submitted to two language models, ChatGPT-4 and Bard. When a model’s first answer was wrong, it was given a second attempt. The responses of the human internist experts were then compared with those of the artificial intelligences.
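For illustration, here is a minimal Python sketch of the two-attempt protocol described above. It is a hypothetical reconstruction, not the authors’ actual workflow: the study submitted the cases through the public chat interfaces and judged the answers by expert review, so `query_model`, `diagnosis_matches`, and all case data below are illustrative stand-ins.

```python
# Hypothetical sketch of the two-attempt evaluation protocol.
# `query_model(name, prompt)` stands in for any chat client returning a string.

def diagnosis_matches(answer: str, expected: str) -> bool:
    # Crude keyword check; the study relied on expert human judgment instead.
    return expected.lower() in answer.lower()

def evaluate(cases, model_names, query_model):
    """cases: list of (case_text, expected_diagnosis) pairs."""
    scores = {name: 0 for name in model_names}
    for case_text, expected in cases:
        for name in model_names:
            answer = query_model(name, case_text)
            if not diagnosis_matches(answer, expected):
                # One retry was offered after a wrong first answer.
                retry_prompt = case_text + "\nThat diagnosis is wrong; please try again."
                answer = query_model(name, retry_prompt)
            if diagnosis_matches(answer, expected):
                scores[name] += 1
    return scores

# Example call: evaluate(cases, ["ChatGPT-4", "Bard"], my_chat_client)
```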
Results
Of the 12 clinical cases submitted, the human internist experts diagnosed nine (75%), ChatGPT-4 diagnosed three (25%), and Bard diagnosed one (8%). One of the cases solved by ChatGPT-4 was not solved by the internist experts. Both models returned their answers within a few seconds.
Conclusions
At present, the diagnostic skills of ChatGPT-4 and Bard in solving complex clinical cases remain inferior to those of human experts, but they are very promising. Although only recently made available to the general public, these models already show impressive capabilities, calling into question the role of the diagnostic physician. It would be advisable to adapt the rules or subjects of future “Printemps de la Médecine Interne” so that the cases cannot be solved by a public language model.