Assessment of Artificial Intelligence Chatbot Responses to Common Patient Questions on Bone Sarcoma

IF 2.0 · JCR Q3 (Oncology) · CAS Medicine Tier 3
Kameel Khabaz, Nicole J Newman-Hung, Jennifer R Kallini, Joseph Kendal, Alexander B Christ, Nicholas M Bernthal, Lauren E Wessel
{"title":"评估人工智能聊天机器人对骨肉瘤常见患者问题的回应。","authors":"Kameel Khabaz, Nicole J Newman-Hung, Jennifer R Kallini, Joseph Kendal, Alexander B Christ, Nicholas M Bernthal, Lauren E Wessel","doi":"10.1002/jso.27966","DOIUrl":null,"url":null,"abstract":"<p><strong>Background and objectives: </strong>The potential impacts of artificial intelligence (AI) chatbots on care for patients with bone sarcoma is poorly understood. Elucidating potential risks and benefits would allow surgeons to define appropriate roles for these tools in clinical care.</p><p><strong>Methods: </strong>Eleven questions on bone sarcoma diagnosis, treatment, and recovery were inputted into three AI chatbots. Answers were assessed on a 5-point Likert scale for five clinical accuracy metrics: relevance to the question, balance and lack of bias, basis on established data, factual accuracy, and completeness in scope. Responses were quantitatively assessed for empathy and readability. The Patient Education Materials Assessment Tool (PEMAT) was assessed for understandability and actionability.</p><p><strong>Results: </strong>Chatbots scored highly on relevance (4.24) and balance/lack of bias (4.09) but lower on basing responses on established data (3.77), completeness (3.68), and factual accuracy (3.66). Responses generally scored well on understandability (84.30%), while actionability scores were low for questions on treatment (64.58%) and recovery (60.64%). GPT-4 exhibited the highest empathy (4.12). Readability scores averaged between 10.28 for diagnosis questions to 11.65 for recovery questions.</p><p><strong>Conclusions: </strong>While AI chatbots are promising tools, current limitations in factual accuracy and completeness, as well as concerns of inaccessibility to populations with lower health literacy, may significantly limit their clinical utility.</p>","PeriodicalId":17111,"journal":{"name":"Journal of Surgical Oncology","volume":" ","pages":""},"PeriodicalIF":2.0000,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Assessment of Artificial Intelligence Chatbot Responses to Common Patient Questions on Bone Sarcoma.\",\"authors\":\"Kameel Khabaz, Nicole J Newman-Hung, Jennifer R Kallini, Joseph Kendal, Alexander B Christ, Nicholas M Bernthal, Lauren E Wessel\",\"doi\":\"10.1002/jso.27966\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background and objectives: </strong>The potential impacts of artificial intelligence (AI) chatbots on care for patients with bone sarcoma is poorly understood. Elucidating potential risks and benefits would allow surgeons to define appropriate roles for these tools in clinical care.</p><p><strong>Methods: </strong>Eleven questions on bone sarcoma diagnosis, treatment, and recovery were inputted into three AI chatbots. Answers were assessed on a 5-point Likert scale for five clinical accuracy metrics: relevance to the question, balance and lack of bias, basis on established data, factual accuracy, and completeness in scope. Responses were quantitatively assessed for empathy and readability. The Patient Education Materials Assessment Tool (PEMAT) was assessed for understandability and actionability.</p><p><strong>Results: </strong>Chatbots scored highly on relevance (4.24) and balance/lack of bias (4.09) but lower on basing responses on established data (3.77), completeness (3.68), and factual accuracy (3.66). 
Responses generally scored well on understandability (84.30%), while actionability scores were low for questions on treatment (64.58%) and recovery (60.64%). GPT-4 exhibited the highest empathy (4.12). Readability scores averaged between 10.28 for diagnosis questions to 11.65 for recovery questions.</p><p><strong>Conclusions: </strong>While AI chatbots are promising tools, current limitations in factual accuracy and completeness, as well as concerns of inaccessibility to populations with lower health literacy, may significantly limit their clinical utility.</p>\",\"PeriodicalId\":17111,\"journal\":{\"name\":\"Journal of Surgical Oncology\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":2.0000,\"publicationDate\":\"2024-10-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Surgical Oncology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1002/jso.27966\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ONCOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Surgical Oncology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1002/jso.27966","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ONCOLOGY","Score":null,"Total":0}
Citations: 0

Abstract


Background and objectives: The potential impacts of artificial intelligence (AI) chatbots on care for patients with bone sarcoma are poorly understood. Elucidating potential risks and benefits would allow surgeons to define appropriate roles for these tools in clinical care.

Methods: Eleven questions on bone sarcoma diagnosis, treatment, and recovery were input into three AI chatbots. Answers were assessed on a 5-point Likert scale for five clinical accuracy metrics: relevance to the question, balance and lack of bias, basis on established data, factual accuracy, and completeness in scope. Responses were also quantitatively assessed for empathy and readability, and the Patient Education Materials Assessment Tool (PEMAT) was used to evaluate understandability and actionability.
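
As a rough illustration of the scoring scheme described above, the sketch below pools 5-point Likert ratings for the five clinical accuracy metrics across chatbots and questions and reports a mean per metric. The data layout, names, and toy scores are hypothetical; this is not the study's analysis code.

```python
# Illustrative only: pool 5-point Likert ratings for the five clinical
# accuracy metrics across chatbots and questions, then average per metric.
# The dictionary layout and scores are made up for demonstration.
from statistics import mean

METRICS = [
    "relevance",
    "balance_lack_of_bias",
    "basis_on_established_data",
    "factual_accuracy",
    "completeness",
]

# ratings[chatbot][question_id][metric] -> Likert score (1-5)
ratings = {
    "chatbot_a": {1: {"relevance": 5, "balance_lack_of_bias": 4,
                      "basis_on_established_data": 4,
                      "factual_accuracy": 4, "completeness": 3}},
    "chatbot_b": {1: {"relevance": 4, "balance_lack_of_bias": 4,
                      "basis_on_established_data": 3,
                      "factual_accuracy": 3, "completeness": 4}},
}

def metric_means(ratings):
    """Average each clinical accuracy metric over every chatbot and question."""
    pooled = {metric: [] for metric in METRICS}
    for per_question in ratings.values():
        for scores in per_question.values():
            for metric in METRICS:
                pooled[metric].append(scores[metric])
    return {metric: round(mean(values), 2) for metric, values in pooled.items()}

print(metric_means(ratings))
# e.g. {'relevance': 4.5, 'balance_lack_of_bias': 4.0, ...}
```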

Results: Chatbots scored highly on relevance (4.24) and balance/lack of bias (4.09) but lower on basing responses on established data (3.77), completeness (3.68), and factual accuracy (3.66). Responses generally scored well on understandability (84.30%), while actionability scores were low for questions on treatment (64.58%) and recovery (60.64%). GPT-4 exhibited the highest empathy (4.12). Mean readability scores ranged from 10.28 for diagnosis questions to 11.65 for recovery questions.
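
The readability figures in the 10-12 range read like US school grade levels. The sketch below computes a grade-level score in the spirit of Flesch-Kincaid; the abstract does not name the readability formula used, so the choice of formula and the crude syllable heuristic are assumptions for illustration only.

```python
# Illustrative grade-level readability estimate (Flesch-Kincaid-style).
# The study's actual readability instrument is not specified in the abstract;
# the syllable counter here is a deliberately simple heuristic.
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as vowel groups, dropping a trailing silent 'e'."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    n = len(groups)
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def grade_level(text: str) -> float:
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return round(0.39 * len(words) / len(sentences)
                 + 11.8 * syllables / len(words) - 15.59, 2)

sample = ("Osteosarcoma is a malignant bone tumor that is usually treated "
          "with chemotherapy followed by surgical resection of the lesion.")
print(grade_level(sample))  # prints an estimated US grade level for the sample
```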

Conclusions: While AI chatbots are promising tools, current limitations in factual accuracy and completeness, as well as concerns of inaccessibility to populations with lower health literacy, may significantly limit their clinical utility.

Source journal: Journal of Surgical Oncology
CiteScore: 4.70
Self-citation rate: 4.00%
Articles published per year: 367
Review turnaround: 2 months
Journal description: The Journal of Surgical Oncology offers peer-reviewed, original papers in the field of surgical oncology and broadly related surgical sciences, including reports on experimental and laboratory studies. As an international journal, the editors encourage participation from leading surgeons around the world. The JSO is the representative journal for the World Federation of Surgical Oncology Societies. Publishing 16 issues in 2 volumes each year, the journal accepts Research Articles, in-depth Reviews of timely interest, Letters to the Editor, and invited Editorials. Guest Editors from the JSO Editorial Board oversee multiple special Seminars issues each year. These Seminars include multifaceted Reviews on a particular topic or current issue in surgical oncology, which are invited from experts in the field.