Evaluating the Application of Artificial Intelligence and Ambient Listening to Generate Medical Notes in Vitreoretinal Clinic Encounters.

Clinical Ophthalmology (Auckland, N.Z.) · Pub Date: 2025-06-03 · eCollection Date: 2025-01-01 · DOI: 10.2147/OPTH.S513633
Neeket R Patel, Corey R Lacher, Alan Y Huang, Anton Kolomeyer, J Clay Bavinger, Robert M Carroll, Benjamin J Kim, Jonathan C Tsui
Clinical Ophthalmology, 2025;19:1763-1769. Publication type: Journal Article (eCollection 2025-01-01). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12146405/pdf/

Abstract

Purpose: To analyze the application of large language models (LLMs) in listening to and generating medical documentation for vitreoretinal clinic encounters.

Subjects: Two publicly available large language models, Google Gemini 1.0 Pro and ChatGPT 3.5.

Methods: Patient-physician dialogues simulating real-world vitreoretinal clinic scenarios were scripted and recorded for standardization. Two artificial intelligence engines were given the audio files to transcribe the dialogue and produce medical documentation of the encounters. Similarity between each dialogue and its LLM transcription was assessed using an online comparability tool. A panel of practicing retina specialists evaluated each generated medical note.
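The abstract does not name the online comparability tool used to score transcription similarity. A minimal sketch of one way such a similarity percentage could be computed, using Python's standard-library difflib (the function name and whitespace/case normalization here are illustrative assumptions, not the authors' method):

```python
import difflib

def transcript_similarity(script: str, transcript: str) -> float:
    """Return a 0-100 similarity score between a scripted dialogue and an
    LLM-generated transcript, ignoring case and extra whitespace."""
    a = " ".join(script.lower().split())
    b = " ".join(transcript.lower().split())
    return 100 * difflib.SequenceMatcher(None, a, b).ratio()

# Hypothetical example lines, for illustration only:
script = "The patient reports new floaters in the right eye for two days."
transcript = "The patient reports new floaters in the right eye for 2 days."
print(round(transcript_similarity(script, transcript), 1))
```

A character-level ratio like this penalizes small transcription substitutions ("two" vs "2") without requiring word alignment; dedicated tools may instead report word error rate or token-level overlap.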

Main outcome measures: The number of discrepancies and overall similarity of LLM text compared to the scripted patient-physician dialogues, and scoring of each medical note by five retina specialists on the Physician Documentation Quality Instrument-9 (PDQI-9).

Results: On average, the documentation produced by the AI engines scored 81.5% of total possible points for documentation quality. Similarity between the pre-formed dialogue scripts and the transcribed encounters was higher for ChatGPT (96.5%) than for Gemini (90.6%; p<0.01). The mean total PDQI-9 score across all encounters was significantly greater for ChatGPT 3.5 (196.2/225, 87.2%) than for Gemini 1.0 Pro (170.4/225, 75.7%; p=0.002).
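The reported PDQI-9 totals are consistent with nine items rated on a 5-point scale by five raters, giving a maximum of 9 × 5 × 5 = 225 per engine. A small sketch of that arithmetic (the function name and parameter defaults are illustrative assumptions inferred from the denominators in the abstract):

```python
def pdqi9_percent(total: float, n_raters: int = 5,
                  n_items: int = 9, scale_max: int = 5) -> float:
    """Convert a summed PDQI-9 total into a percentage of the maximum score."""
    return 100 * total / (n_raters * n_items * scale_max)

print(round(pdqi9_percent(196.2), 1))  # ChatGPT 3.5: 196.2/225 -> 87.2
print(round(pdqi9_percent(170.4), 1))  # Gemini 1.0 Pro: 170.4/225 -> 75.7
```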

Conclusion: The authors report the aptitude of two popular LLMs (ChatGPT 3.5 and Google Gemini 1.0 Pro) at generating medical notes from audio recordings of scripted vitreoretinal clinical encounters, assessed with a validated medical documentation instrument. Artificial intelligence can produce quality vitreoretinal clinic encounter notes after listening to patient-physician dialogues, despite case complexity and missing encounter variables. The engines' performance was satisfactory but sometimes included fabricated information. These findings demonstrate the potential utility of LLMs in reducing the documentation burden on physicians and streamlining patient care.
