Effectiveness of ChatGPT in explaining complex medical reports to patients

Mengxuan Sun, Ehud Reiter, Anne E Kiltie, George Ramsay, Lisa Duncan, Peter Murchie, Rosalind Adam

arXiv:2406.15963 (arXiv - QuanBio - Other Quantitative Biology), published 2024-06-23
Electronic health records contain detailed information about patients' medical conditions, but they are difficult for patients to understand even when patients have access to them. We explore whether ChatGPT (GPT-4) can help explain multidisciplinary team (MDT) reports to colorectal and prostate cancer patients. These reports are written in dense medical language and assume clinical knowledge, so they are a good test of ChatGPT's ability to explain complex medical reports to patients. We asked clinicians and lay people (not patients) to review ChatGPT's explanations and responses. We also ran three focus groups (including cancer patients, caregivers, computer scientists, and clinicians) to discuss ChatGPT's output. Our studies highlighted issues with inaccurate information, inappropriate language, limited personalization, distrust of AI, and challenges in integrating large language models (LLMs) into clinical workflows. These issues will need to be resolved before LLMs can be used to explain complex personal medical information to patients.