Chuting Tang, Nilupul Mudunna, Ian Turner, Mohammad Asghari-Jafarabadi, Keith Joe, Lisa Brichko
{"title":"使用人工智能生成急诊科出院摘要。","authors":"Chuting Tang, Nilupul Mudunna, Ian Turner, Mohammad Asghari-Jafarabadi, Keith Joe, Lisa Brichko","doi":"10.1071/AH24326","DOIUrl":null,"url":null,"abstract":"<p><p>Objective This study aims to evaluate the effectiveness of utilising an artificial intelligence (AI) model to generate emergency department (ED) discharge summaries in an easily accessible format. Methods This single-centre, proof-of-concept trial was conducted at a tertiary metropolitan private hospital. It involved 142 randomly selected patients who attended in 2023 and were able to be discharged home after care by a single ED doctor. A total of 284 documents were randomised, consisting of 142 de-identified ED medical notes and 142 AI-generated discharge summaries created by ChatGPT4 based on the corresponding ED medical notes. Both document types were distributed to six senior ED doctors, each of whom graded them individually and independently using a predetermined tool that assessed 17 items in four domains (expected contents, readability, medical accuracy, and internal consistency). The primary outcome was the graded score for the AI-generated discharge summaries, compared with that of the original ED medical notes. Results Across the 17 items and four domains assessed, AI-generated discharge summaries rated comparably to ED medical notes in 12 items (including key information, reason for the ED visit, past medical history, allergies and medications, social history, history of presenting complaint, investigations, differential diagnoses list, grammar, formatting, appropriateness, and consistency) and three domains (expected contents, readability, and internal consistency). AI-generated discharge summaries demonstrated high mean scores in the remaining five items (examination findings, primary diagnosis, detailed plan, language clarity, and reflectiveness of treatment) and one domain (medical accuracy). Conclusions AI-generated discharge summaries are potentially comparable to ED medical notes in most key performance domains of a discharge summary.</p>","PeriodicalId":93891,"journal":{"name":"Australian health review : a publication of the Australian Hospital Association","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Use of artificial intelligence to generate emergency department discharge summaries.\",\"authors\":\"Chuting Tang, Nilupul Mudunna, Ian Turner, Mohammad Asghari-Jafarabadi, Keith Joe, Lisa Brichko\",\"doi\":\"10.1071/AH24326\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Objective This study aims to evaluate the effectiveness of utilising an artificial intelligence (AI) model to generate emergency department (ED) discharge summaries in an easily accessible format. Methods This single-centre, proof-of-concept trial was conducted at a tertiary metropolitan private hospital. It involved 142 randomly selected patients who attended in 2023 and were able to be discharged home after care by a single ED doctor. A total of 284 documents were randomised, consisting of 142 de-identified ED medical notes and 142 AI-generated discharge summaries created by ChatGPT4 based on the corresponding ED medical notes. 
Both document types were distributed to six senior ED doctors, each of whom graded them individually and independently using a predetermined tool that assessed 17 items in four domains (expected contents, readability, medical accuracy, and internal consistency). The primary outcome was the graded score for the AI-generated discharge summaries, compared with that of the original ED medical notes. Results Across the 17 items and four domains assessed, AI-generated discharge summaries rated comparably to ED medical notes in 12 items (including key information, reason for the ED visit, past medical history, allergies and medications, social history, history of presenting complaint, investigations, differential diagnoses list, grammar, formatting, appropriateness, and consistency) and three domains (expected contents, readability, and internal consistency). AI-generated discharge summaries demonstrated high mean scores in the remaining five items (examination findings, primary diagnosis, detailed plan, language clarity, and reflectiveness of treatment) and one domain (medical accuracy). Conclusions AI-generated discharge summaries are potentially comparable to ED medical notes in most key performance domains of a discharge summary.</p>\",\"PeriodicalId\":93891,\"journal\":{\"name\":\"Australian health review : a publication of the Australian Hospital Association\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Australian health review : a publication of the Australian Hospital Association\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1071/AH24326\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Australian health review : a publication of the Australian Hospital Association","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1071/AH24326","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Use of artificial intelligence to generate emergency department discharge summaries.
Objective: This study aims to evaluate the effectiveness of utilising an artificial intelligence (AI) model to generate emergency department (ED) discharge summaries in an easily accessible format.

Methods: This single-centre, proof-of-concept trial was conducted at a tertiary metropolitan private hospital. It involved 142 randomly selected patients who attended the ED in 2023 and were discharged home after care by a single ED doctor. A total of 284 documents were randomised, consisting of 142 de-identified ED medical notes and 142 AI-generated discharge summaries created by ChatGPT4 based on the corresponding ED medical notes. Both document types were distributed to six senior ED doctors, each of whom graded them individually and independently using a predetermined tool that assessed 17 items in four domains (expected contents, readability, medical accuracy, and internal consistency). The primary outcome was the graded score for the AI-generated discharge summaries compared with that of the original ED medical notes.

Results: Across the 17 items and four domains assessed, AI-generated discharge summaries were rated comparably to ED medical notes in 12 items (including key information, reason for the ED visit, past medical history, allergies and medications, social history, history of presenting complaint, investigations, differential diagnoses list, grammar, formatting, appropriateness, and consistency) and three domains (expected contents, readability, and internal consistency). AI-generated discharge summaries demonstrated higher mean scores in the remaining five items (examination findings, primary diagnosis, detailed plan, language clarity, and reflectiveness of treatment) and one domain (medical accuracy).

Conclusions: AI-generated discharge summaries are potentially comparable to ED medical notes in most key performance domains of a discharge summary.
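The abstract does not disclose the prompt or interface the authors used with ChatGPT4. For illustration only, the sketch below shows one way a de-identified ED note could be passed to a GPT-4 class model via the OpenAI Python client to produce a structured discharge summary; the model name, prompt wording, and requested field layout are assumptions, not the study's method.

```python
# Hypothetical sketch: generating an ED discharge summary from a de-identified note.
# The prompt, model choice, and summary structure below are assumptions for
# illustration and are not taken from the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_discharge_summary(deidentified_note: str) -> str:
    """Ask the model to rewrite an ED medical note as a discharge summary."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an emergency department doctor. Rewrite the following "
                    "de-identified ED medical note as a discharge summary covering: "
                    "reason for visit, relevant history, examination findings, "
                    "investigations, primary diagnosis, differential diagnoses, and "
                    "a follow-up plan. Use clear, plain language suitable for the "
                    "patient and their general practitioner."
                ),
            },
            {"role": "user", "content": deidentified_note},
        ],
    )
    return response.choices[0].message.content


# Example usage with a fabricated, non-identifiable note:
# print(generate_discharge_summary(
#     "45M, 2 h central chest pain, ECG normal, serial troponins negative, "
#     "pain settled with antacid, discharged with GP follow-up."
# ))
```

In the study, the resulting summaries were then graded blind against the original notes using the 17-item tool described above; the code sketch covers only the generation step.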