A quality improvement project of patient perception of AI-generated discharge summaries: a comparison with doctor-written summaries.

Impact factor 1.1 · CAS Tier 4 (Medicine) · JCR Q3 (Surgery)
J Bass, C Bodimeade, N Choudhury
{"title":"A quality improvement project of patient perception of AI-generated discharge summaries: a comparison with doctor-written summaries.","authors":"J Bass, C Bodimeade, N Choudhury","doi":"10.1308/rcsann.2025.0014","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>Every patient admitted to hospital should receive a discharge letter when they leave. Artificial intelligence (AI) has the capability to fulfil this task. Here, we investigate the use of AI to generate discharge letters compared with letters written by a doctor.</p><p><strong>Methods: </strong>Using an AI tool, ChatGPT, we generated two discharge letters for hypothetical elective tonsillectomy patients. We asked the parents of paediatric tonsillectomy patients to blindly compare the AI letters with two anonymised real discharge letters for tonsillectomy patients, written by two ear, nose and throat (ENT) doctors. Participants were asked to rate the quality of medical information, the ease of reading and the length of each of the four discharge letters. They were also asked to deduce who they thought wrote each discharge letter (AI or a doctor).</p><p><strong>Results: </strong>Forty-seven parents responded to the survey. Our results demonstrate that the AI letters were reported to contain significantly better medical information (<i>p</i> = 0.0059) and were significantly easier to read than the doctor-written letters (<i>p</i> < 0.0001). Respondents had a 50% sensitivity in correctly identifying the letters written by AI.</p><p><strong>Conclusions: </strong>AI tools have the potential to write tonsillectomy discharge letters of comparable quality (as perceived by our participant population) to those written by ENT doctors. This study provides preliminary evidence to show that AI-generated discharge letters may be an interesting avenue of further investigation as an application for this tool.</p>","PeriodicalId":8088,"journal":{"name":"Annals of the Royal College of Surgeons of England","volume":" ","pages":""},"PeriodicalIF":1.1000,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annals of the Royal College of Surgeons of England","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1308/rcsann.2025.0014","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"SURGERY","Score":null,"Total":0}
Citations: 0

Abstract

Introduction: Every patient admitted to hospital should receive a discharge letter when they leave. Artificial intelligence (AI) has the capability to fulfil this task. Here, we investigate the use of AI to generate discharge letters compared with letters written by a doctor.

Methods: Using an AI tool, ChatGPT, we generated two discharge letters for hypothetical elective tonsillectomy patients. We asked the parents of paediatric tonsillectomy patients to blindly compare the AI letters with two anonymised real discharge letters for tonsillectomy patients, written by two ear, nose and throat (ENT) doctors. Participants were asked to rate the quality of medical information, the ease of reading and the length of each of the four discharge letters. They were also asked to deduce who they thought wrote each discharge letter (AI or a doctor).
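
The abstract does not publish the prompts, model version or interface the authors used; purely as a hedged illustration of the kind of request involved, a tonsillectomy discharge letter could be generated through a chat-completion API roughly as follows (the SDK, model name and prompt wording are all assumptions, not the authors' method):

```python
# Hypothetical sketch only: the study used ChatGPT but does not publish its exact
# prompt, model version or interface, so every detail below is an assumption.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Write a discharge letter for the parents of a child who has had an "
    "elective tonsillectomy. Use plain, parent-friendly language and cover: "
    "the operation performed, expected recovery, pain relief, diet, "
    "when to return to school, and red-flag symptoms needing urgent review."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; the study does not state which version was used
    messages=[{"role": "user", "content": PROMPT}],
)

# Print the generated letter for review before any clinical use
print(response.choices[0].message.content)
```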

Results: Forty-seven parents responded to the survey. Our results demonstrate that the AI letters were reported to contain significantly better medical information (p = 0.0059) and were significantly easier to read than the doctor-written letters (p < 0.0001). Respondents had a 50% sensitivity in correctly identifying the letters written by AI.
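
The abstract does not name the statistical test behind these p-values or give the raw identification counts; as a sketch only, a comparison of ordinal ratings and the sensitivity calculation might look like the following (the choice of Mann-Whitney U and all numbers are illustrative, not the study's data):

```python
# Illustrative sketch: test choice and toy data are assumptions for demonstration.
from scipy.stats import mannwhitneyu

# Hypothetical 1-5 ratings of "quality of medical information"
ai_ratings = [5, 4, 5, 4, 5, 4, 4, 5]
doctor_ratings = [3, 4, 3, 2, 4, 3, 3, 4]

stat, p_value = mannwhitneyu(ai_ratings, doctor_ratings, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.4f}")

# Sensitivity for identifying AI letters: of all AI-written letters shown,
# the fraction respondents correctly labelled as AI.
true_positives = 47    # hypothetical counts, not the study's raw data
false_negatives = 47
sensitivity = true_positives / (true_positives + false_negatives)
print(f"Sensitivity = {sensitivity:.0%}")  # 50%, as reported in the abstract
```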

Conclusions: AI tools have the potential to write tonsillectomy discharge letters of comparable quality (as perceived by our participant population) to those written by ENT doctors. This study provides preliminary evidence to show that AI-generated discharge letters may be an interesting avenue of further investigation as an application for this tool.

Source journal

CiteScore: 2.40 · Self-citation rate: 0.00% · Articles per year: 316

About the journal: The Annals of The Royal College of Surgeons of England is the official scholarly research journal of the Royal College of Surgeons and is published eight times a year in January, February, March, April, May, July, September and November. The main aim of the journal is to publish high-quality, peer-reviewed papers that relate to all branches of surgery. The Annals also includes letters and comments, a regular technical section, controversial topics, CORESS feedback and book reviews. The editorial board is composed of experts from all the surgical specialties.