Enhancing doctor-patient communication using large language models for pathology report interpretation.

IF 3.3 · CAS Tier 3 (Medicine) · JCR Q2 MEDICAL INFORMATICS
Xiongwen Yang, Yi Xiao, Di Liu, Yun Zhang, Huiyin Deng, Jian Huang, Huiyou Shi, Dan Liu, Maoli Liang, Xing Jin, Yongpan Sun, Jing Yao, XiaoJiang Zhou, Wankai Guo, Yang He, WeiJuan Tang, Chuan Xu
Journal: BMC Medical Informatics and Decision Making, Vol. 25, No. 1, p. 36
DOI: 10.1186/s12911-024-02838-z
Published: 2025-01-23 (Journal Article)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11756061/pdf/
Citations: 0

Abstract

Background: Large language models (LLMs) are increasingly utilized in healthcare settings. Postoperative pathology reports, which are essential for diagnosing and determining treatment strategies for surgical patients, frequently include complex data that can be challenging for patients to comprehend. This complexity can adversely affect the quality of communication between doctors and patients about their diagnosis and treatment options, potentially impacting patient outcomes such as understanding of their condition, treatment adherence, and overall satisfaction.

Materials and methods: This study analyzed text pathology reports from four hospitals between October and December 2023, focusing on malignant tumors. Using GPT-4, we developed templates for interpretive pathology reports (IPRs) to simplify medical terminology for non-professionals. We randomly selected 70 reports to generate these templates and evaluated the remaining 628 reports for consistency and readability. Patient understanding was measured using a custom-designed pathology report understanding level assessment scale, scored by volunteers with no medical background. The study also recorded doctor-patient communication time and patient comprehension levels before and after using IPRs.

Results: Among the 698 pathology reports analyzed, interpretation through LLMs significantly improved readability and patient understanding. With the use of IPRs, the average doctor-patient communication time decreased by over 70%, from 35 to 10 min (P < 0.001). Patients also scored higher on understanding when provided with the AI-generated reports, improving from 5.23 to 7.98 points (P < 0.001), indicating an effective translation of complex medical information. Consistency between original pathology reports (OPRs) and IPRs was also evaluated, with results showing high levels of consistency across all assessed dimensions and an average score of 4.95 out of 5.

Conclusion: This research demonstrates the efficacy of LLMs like GPT-4 in enhancing doctor-patient communication by translating pathology reports into more accessible language. While this study did not directly measure patient outcomes or satisfaction, it provides evidence that improved understanding and reduced communication time may positively influence patient engagement. These findings highlight the potential of AI to bridge gaps between medical professionals and the public in healthcare environments.

Source journal

CiteScore: 7.20
Self-citation rate: 5.70%
Articles published: 297
Review turnaround: 1 month

Journal overview: BMC Medical Informatics and Decision Making is an open access journal publishing original peer-reviewed research articles in relation to the design, development, implementation, use, and evaluation of health information technologies and decision-making for human health.