Information about labor epidural analgesia: an updated evaluation on the readability, accuracy, and quality of ChatGPT responses incorporating patient preferences and complex clinical scenarios

IF 2.6 · CAS Tier 3 (Medicine) · Q2 ANESTHESIOLOGY
C.W. Tan, J.C.Y. Chan, J.J.I. Chan, S. Nagarajan, B.L. Sng
{"title":"关于分娩硬膜外镇痛的信息:对ChatGPT反应的可读性、准确性和质量的最新评估,包括患者偏好和复杂的临床情况","authors":"C.W. Tan ,&nbsp;J.C.Y. Chan ,&nbsp;J.J.I. Chan ,&nbsp;S. Nagarajan ,&nbsp;B.L. Sng","doi":"10.1016/j.ijoa.2025.104688","DOIUrl":null,"url":null,"abstract":"<div><h3>Background</h3><div>Recent studies evaluating frequently asked questions (FAQs) on labor epidural analgesia (LEA) only used generic questions without incorporating detailed clinical information that reflects patient-specific inputs. We investigated the performance of ChatGPT in addressing these questions related to LEA with an emphasis on individual preferences and clinical conditions.</div></div><div><h3>Methods</h3><div>Twenty-nine questions for the AI chatbot were generated from the commonly asked questions relating to LEA based on clinical conditions. The generation of responses was performed in January 2025 with each question under individual sub-topics initiated as a “New chat” in ChatGPT-4o. Upon having the first questions answered, subsequent question(s) in the same sub-topic were continued in the same chat following the sequences as predefined. The readability of each response was graded using six readability indices, while the accuracy, Patient Education Materials Assessment Tool for Print (PEMAT) understandability and actionability was assessed by four obstetric anesthesiologists.</div></div><div><h3>Results</h3><div>The mean readability indices of the ChatGPT-4o responses to the questions were generally rated as fairly difficult to very difficult, which corresponded to a US grade level between 11th grade to college level entry. The mean (± standard deviation) accuracy of the responses was 97.7% ± 8.1%. The PEMAT understandability and actionability scores were 97.9% ± 0.9%) and 98.0% ± 1.4%), respectively.</div></div><div><h3>Conclusions</h3><div>ChatGPT can provide accurate and readable information about LEA even under different clinical contexts. However, improvement is needed to refine the responses with suitable prompts to simplify the outputs and improve readability. These approaches will thereby meet the need for the effective delivery of reliable patient education information.</div></div>","PeriodicalId":14250,"journal":{"name":"International journal of obstetric anesthesia","volume":"63 ","pages":"Article 104688"},"PeriodicalIF":2.6000,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Information about labor epidural analgesia: an updated evaluation on the readability, accuracy, and quality of ChatGPT responses incorporating patient preferences and complex clinical scenarios\",\"authors\":\"C.W. Tan ,&nbsp;J.C.Y. Chan ,&nbsp;J.J.I. Chan ,&nbsp;S. Nagarajan ,&nbsp;B.L. Sng\",\"doi\":\"10.1016/j.ijoa.2025.104688\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Background</h3><div>Recent studies evaluating frequently asked questions (FAQs) on labor epidural analgesia (LEA) only used generic questions without incorporating detailed clinical information that reflects patient-specific inputs. We investigated the performance of ChatGPT in addressing these questions related to LEA with an emphasis on individual preferences and clinical conditions.</div></div><div><h3>Methods</h3><div>Twenty-nine questions for the AI chatbot were generated from the commonly asked questions relating to LEA based on clinical conditions. 
The generation of responses was performed in January 2025 with each question under individual sub-topics initiated as a “New chat” in ChatGPT-4o. Upon having the first questions answered, subsequent question(s) in the same sub-topic were continued in the same chat following the sequences as predefined. The readability of each response was graded using six readability indices, while the accuracy, Patient Education Materials Assessment Tool for Print (PEMAT) understandability and actionability was assessed by four obstetric anesthesiologists.</div></div><div><h3>Results</h3><div>The mean readability indices of the ChatGPT-4o responses to the questions were generally rated as fairly difficult to very difficult, which corresponded to a US grade level between 11th grade to college level entry. The mean (± standard deviation) accuracy of the responses was 97.7% ± 8.1%. The PEMAT understandability and actionability scores were 97.9% ± 0.9%) and 98.0% ± 1.4%), respectively.</div></div><div><h3>Conclusions</h3><div>ChatGPT can provide accurate and readable information about LEA even under different clinical contexts. However, improvement is needed to refine the responses with suitable prompts to simplify the outputs and improve readability. These approaches will thereby meet the need for the effective delivery of reliable patient education information.</div></div>\",\"PeriodicalId\":14250,\"journal\":{\"name\":\"International journal of obstetric anesthesia\",\"volume\":\"63 \",\"pages\":\"Article 104688\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2025-05-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International journal of obstetric anesthesia\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0959289X25002808\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ANESTHESIOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International journal of obstetric anesthesia","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0959289X25002808","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ANESTHESIOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Background

Recent studies evaluating frequently asked questions (FAQs) on labor epidural analgesia (LEA) used only generic questions, without incorporating detailed clinical information that reflects patient-specific inputs. We investigated the performance of ChatGPT in addressing LEA-related questions with an emphasis on individual preferences and clinical conditions.

Methods

Twenty-nine questions for the AI chatbot were generated from commonly asked questions relating to LEA, based on clinical conditions. Responses were generated in January 2025, with each question under an individual sub-topic initiated as a “New chat” in ChatGPT-4o. After the first question was answered, subsequent questions in the same sub-topic were continued in the same chat in the predefined sequence. The readability of each response was graded using six readability indices, while accuracy and Patient Education Materials Assessment Tool for Print (PEMAT) understandability and actionability were assessed by four obstetric anesthesiologists.
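
The abstract does not name the six readability indices, but Flesch Reading Ease and Flesch-Kincaid Grade Level are among the most common choices for this kind of grading. As a rough illustration of how such scores are computed, here is a minimal Python sketch; the syllable counter is a naive heuristic and the sample sentence is hypothetical, not taken from the study:

```python
import re

def count_syllables(word: str) -> int:
    """Naive heuristic: count groups of consecutive vowels, dropping a silent trailing 'e'."""
    word = word.lower()
    if word.endswith("e") and not word.endswith(("le", "ee")):
        word = word[:-1]
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def readability(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)           # words per sentence
    spw = syllables / len(words)                # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw    # Flesch Reading Ease
    fkgl = 0.39 * wps + 11.8 * spw - 15.59      # Flesch-Kincaid Grade Level
    return {"flesch_reading_ease": round(fre, 1), "fk_grade_level": round(fkgl, 1)}

def fre_band(score: float) -> str:
    """Map a Flesch Reading Ease score to its standard difficulty band."""
    if score >= 60:
        return "standard or easier"
    if score >= 50:
        return "fairly difficult"
    if score >= 30:
        return "difficult"
    return "very difficult"

if __name__ == "__main__":
    sample = ("Epidural analgesia is administered via a catheter placed in the "
              "epidural space to provide continuous pain relief during labor.")
    scores = readability(sample)
    print(scores, fre_band(scores["flesch_reading_ease"]))
```

The Flesch bands used above (50–60 “fairly difficult”, below 30 “very difficult”) are the standard scale behind the difficulty labels reported in the Results.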

Results

The mean readability indices of the ChatGPT-4o responses were generally rated as fairly difficult to very difficult, corresponding to a US grade level between 11th grade and college entry. The mean (± standard deviation) accuracy of the responses was 97.7% ± 8.1%. The PEMAT understandability and actionability scores were 97.9% ± 0.9% and 98.0% ± 1.4%, respectively.

Conclusions

ChatGPT can provide accurate and readable information about LEA even across different clinical contexts. However, the responses still need refinement with suitable prompts to simplify the outputs and improve readability; an illustrative prompt is sketched below. Such approaches would help meet the need for effective delivery of reliable patient education information.
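
The abstract does not specify what such prompts would look like. As a minimal, hypothetical sketch of the readability-constrained prompting the authors recommend, assuming the OpenAI Python SDK and the gpt-4o model name (the system prompt is illustrative, not from the study):

```python
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

# Hypothetical system prompt constraining reading level; not the study's prompt.
SYSTEM_PROMPT = (
    "You are a patient-education assistant. Answer questions about labor "
    "epidural analgesia accurately, in plain language at roughly a 6th- to "
    "8th-grade reading level. Use short sentences and avoid medical jargon, "
    "or explain it briefly when it is unavoidable."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Will an epidural slow down my labor?"},
    ],
)
print(response.choices[0].message.content)
```

Pairing such a prompt with the readability scoring shown earlier would let the output be checked against a target grade level before it is shown to patients.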
Source journal
CiteScore: 4.70
Self-citation rate: 7.10%
Annual articles: 285
Review time: 58 days
Journal description
The International Journal of Obstetric Anesthesia is the only journal publishing original articles devoted exclusively to obstetric anesthesia, bringing together all three of its principal components: anesthesia care for operative delivery and the perioperative period, pain relief in labour, and care of the critically ill obstetric patient.
• Original research (both clinical and laboratory), short reports and case reports will be considered.
• The journal also publishes invited review articles and debates on topical and controversial subjects in the area of obstetric anesthesia.
• Articles on related topics such as perinatal physiology and pharmacology and all subjects of importance to obstetric anaesthetists/anesthesiologists are also welcome.
The journal is peer-reviewed by international experts. Scholarship is stressed to include the focus on discovery, application of knowledge across fields, and informing the medical community. Through the peer-review process, we hope to attest to the quality of scholarship and guide the Journal to extend and transform knowledge in this important and expanding area.