Challenging the Chatbot: An Assessment of ChatGPT's Diagnoses and Recommendations for DBP Case Studies.

IF 1.8 · CAS Tier 3 (Medicine) · JCR Q3 (Behavioral Sciences)
Rachel Kim, Alex Margolis, Joe Barile, Kyle Han, Saia Kalash, Helen Papaioannou, Anna Krevskaya, Ruth Milanaik
{"title":"Challenging the Chatbot: An Assessment of ChatGPT's Diagnoses and Recommendations for DBP Case Studies.","authors":"Rachel Kim, Alex Margolis, Joe Barile, Kyle Han, Saia Kalash, Helen Papaioannou, Anna Krevskaya, Ruth Milanaik","doi":"10.1097/DBP.0000000000001255","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>Chat Generative Pretrained Transformer-3.5 (ChatGPT) is a publicly available and free artificial intelligence chatbot that logs billions of visits per day; parents may rely on such tools for developmental and behavioral medical consultations. The objective of this study was to determine how ChatGPT evaluates developmental and behavioral pediatrics (DBP) case studies and makes recommendations and diagnoses.</p><p><strong>Methods: </strong>ChatGPT was asked to list treatment recommendations and a diagnosis for each of 97 DBP case studies. A panel of 3 DBP physicians evaluated ChatGPT's diagnostic accuracy and scored treatment recommendations on accuracy (5-point Likert scale) and completeness (3-point Likert scale). Physicians also assessed whether ChatGPT's treatment plan correctly addressed cultural and ethical issues for relevant cases. Scores were analyzed using Python, and descriptive statistics were computed.</p><p><strong>Results: </strong>The DBP panel agreed with ChatGPT's diagnosis for 66.2% of the case reports. The mean accuracy score of ChatGPT's treatment plan was deemed by physicians to be 4.6 (between entirely correct and more correct than incorrect), and the mean completeness was 2.6 (between complete and adequate). Physicians agreed that ChatGPT addressed relevant cultural issues in 10 out of the 11 appropriate cases and the ethical issues in the single ethical case.</p><p><strong>Conclusion: </strong>While ChatGPT can generate a comprehensive and adequate list of recommendations, the diagnosis accuracy rate is still low. Physicians must advise caution to patients when using such online sources.</p>","PeriodicalId":50215,"journal":{"name":"Journal of Developmental and Behavioral Pediatrics","volume":" ","pages":"e8-e13"},"PeriodicalIF":1.8000,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Developmental and Behavioral Pediatrics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1097/DBP.0000000000001255","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/2/9 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"BEHAVIORAL SCIENCES","Score":null,"Total":0}
引用次数: 0

Abstract

Objective: Chat Generative Pretrained Transformer-3.5 (ChatGPT) is a publicly available and free artificial intelligence chatbot that logs billions of visits per day; parents may rely on such tools for developmental and behavioral medical consultations. The objective of this study was to determine how ChatGPT evaluates developmental and behavioral pediatrics (DBP) case studies and makes recommendations and diagnoses.

Methods: ChatGPT was asked to list treatment recommendations and a diagnosis for each of 97 DBP case studies. A panel of 3 DBP physicians evaluated ChatGPT's diagnostic accuracy and scored treatment recommendations on accuracy (5-point Likert scale) and completeness (3-point Likert scale). Physicians also assessed whether ChatGPT's treatment plan correctly addressed cultural and ethical issues for relevant cases. Scores were analyzed using Python, and descriptive statistics were computed.
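The abstract states only that scores were analyzed in Python and that descriptive statistics were computed. The sketch below illustrates what such an analysis could look like; the `cases` records and their field names are hypothetical placeholders, not the study's actual data or analysis code.

```python
# Minimal sketch of the descriptive-statistics step described in the Methods.
# The per-case records below are hypothetical; the study's real scoring data
# were not published with the abstract.
from statistics import mean

# One record per DBP case study, as scored by the physician panel.
cases = [
    {"diagnosis_agreed": True,  "accuracy": 5, "completeness": 3},
    {"diagnosis_agreed": False, "accuracy": 4, "completeness": 2},
    {"diagnosis_agreed": True,  "accuracy": 5, "completeness": 3},
    # ... one entry for each of the 97 case studies
]

agreement_rate = 100 * sum(c["diagnosis_agreed"] for c in cases) / len(cases)
mean_accuracy = mean(c["accuracy"] for c in cases)          # 5-point Likert scale
mean_completeness = mean(c["completeness"] for c in cases)  # 3-point Likert scale

print(f"Diagnostic agreement: {agreement_rate:.1f}%")
print(f"Mean accuracy score: {mean_accuracy:.1f}")
print(f"Mean completeness score: {mean_completeness:.1f}")
```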

Results: The DBP panel agreed with ChatGPT's diagnosis for 66.2% of the case reports. The mean accuracy score of ChatGPT's treatment plan was deemed by physicians to be 4.6 (between entirely correct and more correct than incorrect), and the mean completeness was 2.6 (between complete and adequate). Physicians agreed that ChatGPT addressed relevant cultural issues in 10 out of the 11 appropriate cases and the ethical issues in the single ethical case.

Conclusion: While ChatGPT can generate a comprehensive and adequate list of recommendations, its diagnostic accuracy rate remains low. Physicians must advise patients to exercise caution when using such online sources.

Source journal: Journal of Developmental and Behavioral Pediatrics
CiteScore: 3.10 · Self-citation rate: 8.30% · Articles per year: 155 · Review time: 6-12 weeks
Journal description: The Journal of Developmental & Behavioral Pediatrics (JDBP) is a leading resource for clinicians, teachers, and researchers involved in pediatric healthcare and child development. This important journal covers some of the most challenging issues affecting child development and behavior.