Erin S Proctor, David J Nusbaum, John M Lee, Robert C Benirschke, Alexa Freedman, Gregory Raster, Alexander P Glaser, Craig V Labbate, Andrew M Higgins, Brian T Helfand, Eric F Glassy, Lija Joseph, Robert A Edelstein, Elizabeth A Krupinski, Hussein Alnajar, James T Kearns, John V Groth
{"title":"Bridging the gap: Evaluating ChatGPT-generated, personalized, patient-centered prostate biopsy reports.","authors":"Erin S Proctor, David J Nusbaum, John M Lee, Robert C Benirschke, Alexa Freedman, Gregory Raster, Alexander P Glaser, Craig V Labbate, Andrew M Higgins, Brian T Helfand, Eric F Glassy, Lija Joseph, Robert A Edelstein, Elizabeth A Krupinski, Hussein Alnajar, James T Kearns, John V Groth","doi":"10.1093/ajcp/aqae185","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>The highly specialized language used in prostate biopsy pathology reports coupled with low rates of health literacy leave some patients unable to comprehend their medical information. Patients' use of online search engines can lead to misinterpretation of results and emotional distress. Artificial intelligence (AI) tools such as ChatGPT (OpenAI) could simplify complex texts and help patients. This study evaluates patient-centered prostate biopsy reports generated by ChatGPT.</p><p><strong>Methods: </strong>Thirty-five self-generated prostate biopsy reports were synthesized using National Comprehensive Cancer Network guidelines. Each report was entered into ChatGPT, version 4, with the same instructions, and the explanations were evaluated by 5 urologists and 5 pathologists.</p><p><strong>Results: </strong>Respondents rated the AI-generated reports as mostly accurate and complete. All but 1 report was rated complete and grammatically correct by the majority of physicians. Pathologists did not rate any reports as having severe potential for harm, but 1 or more urologists rated severe concern in 20% of the reports. For 80% of the reports, all 5 pathologists felt comfortable sharing them with a patient or another clinician, but all 5 urologists reached the same consensus for only 40% of reports. Although every report required edits, all physicians agreed that they could modify the ChatGPT report faster than they could write an original report.</p><p><strong>Conclusions: </strong>ChatGPT can save physicians substantial time by generating patient-centered reports appropriate for patient and physician audiences with low potential to cause harm. Surveyed physicians have confidence in the overall utility of ChatGPT, supporting further investigation of how AI could be integrated into physicians' workflows.</p>","PeriodicalId":7506,"journal":{"name":"American journal of clinical pathology","volume":" ","pages":""},"PeriodicalIF":2.3000,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"American journal of clinical pathology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1093/ajcp/aqae185","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PATHOLOGY","Score":null,"Total":0}
引用次数: 0
Abstract
Objective: The highly specialized language of prostate biopsy pathology reports, coupled with low rates of health literacy, leaves some patients unable to comprehend their medical information. Patients' use of online search engines can lead to misinterpretation of results and emotional distress. Artificial intelligence (AI) tools such as ChatGPT (OpenAI) could simplify complex texts and help patients. This study evaluates patient-centered prostate biopsy reports generated by ChatGPT.
Methods: Thirty-five synthetic prostate biopsy reports were generated by the study team in accordance with National Comprehensive Cancer Network guidelines. Each report was entered into ChatGPT, version 4, with the same instructions, and the resulting explanations were evaluated by 5 urologists and 5 pathologists.
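As a point of reference for the methods described above, the sketch below shows how a comparable workflow, applying one fixed set of instructions to each synthesized report, might be scripted against the OpenAI API. The prompt wording, model name, and sample report are illustrative assumptions, not the actual instructions or cases used in the study, which entered reports into ChatGPT directly.

```python
# Illustrative sketch only: approximates the study's workflow (one fixed
# instruction applied to every synthesized report) via the OpenAI API.
# The prompt text, model choice, and sample report are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical standardized instructions, applied identically to every report
INSTRUCTIONS = (
    "Rewrite the following prostate biopsy pathology report as a "
    "patient-centered explanation at roughly an 8th-grade reading level. "
    "Explain the Gleason score and grade group, and avoid medical jargon."
)

def explain_report(report_text: str) -> str:
    """Return a plain-language explanation of one synthesized biopsy report."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical synthesized report, not one of the 35 study cases
    sample_report = (
        "Prostate, left lateral base, needle core biopsy: "
        "Adenocarcinoma, Gleason score 3+4=7 (Grade Group 2), "
        "involving 2 of 3 cores, 30% of tissue."
    )
    print(explain_report(sample_report))
```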
Results: Respondents rated the AI-generated reports as mostly accurate and complete. All but 1 of the reports were rated complete and grammatically correct by the majority of physicians. Pathologists did not rate any report as having severe potential for harm, but 1 or more urologists rated 20% of the reports as raising severe concern. For 80% of the reports, all 5 pathologists felt comfortable sharing them with a patient or another clinician; all 5 urologists reached the same consensus for only 40% of reports. Although every report required edits, all physicians agreed that they could modify a ChatGPT report faster than they could write an original report.
Conclusions: ChatGPT can save physicians substantial time by generating patient-centered reports that are appropriate for patient and physician audiences and have low potential to cause harm. Surveyed physicians expressed confidence in the overall utility of ChatGPT, supporting further investigation of how AI could be integrated into physicians' workflows.
Journal Description
The American Journal of Clinical Pathology (AJCP) is the official journal of the American Society for Clinical Pathology and the Academy of Clinical Laboratory Physicians and Scientists. It is a leading international journal for publication of articles concerning novel anatomic pathology and laboratory medicine observations on human disease. AJCP emphasizes articles that focus on the application of evolving technologies for the diagnosis and characterization of diseases and conditions, as well as those that have a direct link toward improving patient care.