Pathology Made Simple: ChatGPT's Summarization of Pathology Reports.

Impact Factor: 3.2
Gali Zabarsky Shasha, Nora Balint-Lahat, Ginette Schiby, Assaf Debby, Iris Barshack, Chen Mayer
{"title":"Pathology Made Simple: ChatGPT's Summarization of Pathology Reports.","authors":"Gali Zabarsky Shasha, Nora Balint-Lahat, Ginette Schiby, Assaf Debby, Iris Barshack, Chen Mayer","doi":"10.5858/arpa.2025-0072-OA","DOIUrl":null,"url":null,"abstract":"<p><strong>Context.—: </strong>Pathology reports are essential for guiding clinical decisions but are often complex and lengthy. Artificial intelligence tools like ChatGPT may offer a way to distill these reports into clear, concise summaries to improve communication and efficiency in clinical settings.</p><p><strong>Objective.—: </strong>To evaluate the performance of ChatGPT-4o in summarizing detailed pathology reports into 1-sentence diagnoses that retain critical clinical information and are accessible to medical professionals.</p><p><strong>Design.—: </strong>We retrospectively analyzed 120 anonymized pathology reports from 2022-2023, focusing on 40 complex cases from 3 subspecialties: breast pathology, melanocytic lesions, and lymphomas. Using a standardized brief prompt, ChatGPT-4o generated 1-sentence summaries for each report. Two independent pathologists assessed each summary for inclusion of essential information, exclusion of irrelevant details, presence of critical errors, and overall readability.</p><p><strong>Results.—: </strong>The mean scores for inclusion of essential information were 8.09 (melanocytic lesions), 8.15 (breast cancers), and 9.55 (lymphomas). Critical error-free rates were 62.5%, 77.5%, and 95%, respectively. Exclusion of nonessential information scored consistently high across subspecialties, and readability was rated 10/10 in 119 of 120 cases.</p><p><strong>Conclusions.—: </strong>ChatGPT-4o, when used with a standardized prompt and expert oversight, shows promising ability to generate concise and readable summaries of pathology reports. While overall performance was strong, occasional errors and limitations in handling complex or multipart cases were noted. 
Further refinement and domain-specific model training may enhance the reliability and clinical utility of artificial intelligence-assisted reporting.</p>","PeriodicalId":93883,"journal":{"name":"Archives of pathology & laboratory medicine","volume":" ","pages":""},"PeriodicalIF":3.2000,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Archives of pathology & laboratory medicine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5858/arpa.2025-0072-OA","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Context.—: Pathology reports are essential for guiding clinical decisions but are often complex and lengthy. Artificial intelligence tools like ChatGPT may offer a way to distill these reports into clear, concise summaries to improve communication and efficiency in clinical settings.

Objective.—: To evaluate the performance of ChatGPT-4o in summarizing detailed pathology reports into 1-sentence diagnoses that retain critical clinical information and are accessible to medical professionals.

Design.—: We retrospectively analyzed 120 anonymized pathology reports from 2022-2023, focusing on 40 complex cases from each of 3 subspecialties: breast pathology, melanocytic lesions, and lymphomas. Using a standardized brief prompt, ChatGPT-4o generated a 1-sentence summary for each report. Two independent pathologists assessed each summary for inclusion of essential information, exclusion of irrelevant details, presence of critical errors, and overall readability.
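The abstract does not quote the standardized brief prompt itself. A minimal sketch of how such a one-sentence summarization prompt might be assembled for a chat-style model (the instruction wording and the `build_summary_prompt` helper are illustrative assumptions, not the authors' actual prompt):

```python
def build_summary_prompt(report_text: str) -> list[dict]:
    """Assemble a chat-style message list asking for a 1-sentence summary.

    The system instruction below is a hypothetical stand-in for the
    study's standardized brief prompt, which the abstract does not quote.
    """
    system = (
        "You are assisting pathologists. Summarize the following pathology "
        "report in exactly one sentence, preserving the diagnosis and all "
        "clinically essential findings; omit nonessential detail."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": report_text},
    ]

# The resulting messages could then be sent to a chat-completion endpoint
# (e.g. model="gpt-4o" via an API client); that network call is omitted here.
```

A fixed system instruction like this is what makes the prompt "standardized": every report is summarized under identical instructions, so reviewer scores compare model behavior rather than prompt variation.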

Results.—: The mean scores for inclusion of essential information were 8.09 (melanocytic lesions), 8.15 (breast cancers), and 9.55 (lymphomas). Critical error-free rates were 62.5%, 77.5%, and 95%, respectively. Exclusion of nonessential information scored consistently high across subspecialties, and readability was rated 10/10 in 119 of 120 cases.
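The critical error-free rates above are simple per-subspecialty percentages over the 40 reviewed cases; a small sketch of the arithmetic (the `has_critical_error` flag list is an assumed representation of the reviewers' per-case judgments):

```python
def critical_error_free_rate(has_critical_error: list[bool]) -> float:
    """Percentage of cases whose summary contained no critical error."""
    error_free = sum(1 for flag in has_critical_error if not flag)
    return 100.0 * error_free / len(has_critical_error)

# e.g. 40 melanocytic-lesion cases with 15 flagged for a critical error:
print(critical_error_free_rate([True] * 15 + [False] * 25))  # 62.5
```

Under this reading, the reported rates correspond to 25/40 (62.5%), 31/40 (77.5%), and 38/40 (95%) error-free summaries per subspecialty.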

Conclusions.—: ChatGPT-4o, when used with a standardized prompt and expert oversight, shows promising ability to generate concise and readable summaries of pathology reports. While overall performance was strong, occasional errors and limitations in handling complex or multipart cases were noted. Further refinement and domain-specific model training may enhance the reliability and clinical utility of artificial intelligence-assisted reporting.
