Evaluating the Application of Artificial Intelligence and Ambient Listening to Generate Medical Notes in Vitreoretinal Clinic Encounters

Neeket R Patel, Corey R Lacher, Alan Y Huang, Anton Kolomeyer, J Clay Bavinger, Robert M Carroll, Benjamin J Kim, Jonathan C Tsui

Clinical Ophthalmology (Auckland, N.Z.) 2025;19:1763-1769. doi:10.2147/OPTH.S513633
Purpose: To analyze the application of large language models (LLMs) to ambient listening and the generation of medical documentation in vitreoretinal clinic encounters.
Subjects: Two publicly available large language models, Google Gemini 1.0 Pro and ChatGPT 3.5.
Methods: Patient-physician dialogues depicting vitreoretinal clinic scenarios were scripted to simulate real-world encounters and recorded for standardization. Each artificial intelligence engine was given the audio files to transcribe the dialogue and produce medical documentation of the encounter. Similarity between the scripted dialogue and each LLM transcription was assessed using an online comparability tool. A panel of practicing retina specialists evaluated each generated medical note.
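The abstract does not specify how audio was fed to each engine. As a rough illustration only, the Python sketch below shows one way such a two-step flow (speech-to-text, then note generation) could be wired up with OpenAI's public SDK; the model names, prompt, and file path are assumptions, not the authors' protocol.

```python
# Hypothetical sketch of an ambient-listening documentation pipeline.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; not the authors' actual setup.
from openai import OpenAI

client = OpenAI()

def note_from_recording(audio_path: str) -> str:
    # Step 1: transcribe the recorded patient-physician dialogue.
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",  # assumed speech-to-text model
            file=audio_file,
        )

    # Step 2: ask a chat model to turn the transcript into a clinic note.
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Draft a vitreoretinal clinic note from this "
                        "encounter transcript."},
            {"role": "user", "content": transcript.text},
        ],
    )
    return completion.choices[0].message.content

# Example: note = note_from_recording("encounter_01.mp3")  # hypothetical file
```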
Main outcome measures: The number of discrepancies in, and overall similarity of, LLM-transcribed text compared with the scripted patient-physician dialogues, and the score assigned to each medical note on the Physician Documentation Quality Instrument-9 (PDQI-9) by five retina specialists.
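The "online comparability tool" is not named in the abstract. A standard-library stand-in such as Python's difflib yields the same kind of percentage similarity between a script and an LLM transcription; the sentences below are invented examples, not study data.

```python
# Minimal similarity check between a scripted dialogue and an LLM
# transcription, using only the Python standard library. difflib's
# ratio() stands in for the unnamed online comparability tool.
from difflib import SequenceMatcher

def similarity_percent(script: str, transcription: str) -> float:
    return 100 * SequenceMatcher(None, script, transcription).ratio()

# Hypothetical snippets for illustration only.
script = "The patient reports new floaters in the right eye for three days."
transcribed = "Patient reports new floaters in the right eye for 3 days."
print(f"{similarity_percent(script, transcribed):.1f}% similar")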
Results: On average, the documentation produced by the AI engines scored 81.5% of the total possible points for documentation quality. Similarity between the pre-formed dialogue scripts and the transcribed encounters was higher for ChatGPT (96.5%) than for Gemini (90.6%, p<0.01). The mean total PDQI-9 score across all encounters was significantly greater for ChatGPT 3.5 (196.2/225, 87.2%) than for Gemini 1.0 Pro (170.4/225, 75.7%, p=0.002).
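The 225-point ceiling is consistent with the PDQI-9's nine items rated on a 5-point scale by five specialists (9 × 5 × 5 = 225). The abstract does not state which statistical test produced p=0.002; a paired comparison across the same encounters, sketched below with SciPy on invented placeholder scores, shows the general shape of such an analysis.

```python
# Illustrative paired comparison of per-encounter PDQI-9 totals.
# The scores below are invented placeholders, not study data, and the
# abstract does not confirm that a paired t-test was the test used.
from scipy import stats

chatgpt_totals = [198, 192, 201, 190, 200]  # hypothetical per-encounter totals
gemini_totals = [172, 168, 175, 165, 172]   # hypothetical per-encounter totals

t_stat, p_value = stats.ttest_rel(chatgpt_totals, gemini_totals)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```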
Conclusion: The authors report the aptitude of two popular LLMs (ChatGPT 3.5 and Google Gemini 1.0 Pro) for generating medical notes from audio recordings of scripted vitreoretinal clinical encounters, assessed with a validated medical documentation instrument. Artificial intelligence can produce quality vitreoretinal clinic medical notes after listening to patient-physician dialogues, despite case complexity and missing encounter variables. The engines' performance was satisfactory, but their notes sometimes included fabricated information. We demonstrate the potential utility of LLMs in reducing the documentation burden on physicians and streamlining patient care.