Pathology Made Simple: ChatGPT's Summarization of Pathology Reports.
Gali Zabarsky Shasha, Nora Balint-Lahat, Ginette Schiby, Assaf Debby, Iris Barshack, Chen Mayer
Archives of Pathology & Laboratory Medicine, published 2025-08-21. DOI: 10.5858/arpa.2025-0072-OA
Context.— Pathology reports are essential for guiding clinical decisions but are often complex and lengthy. Artificial intelligence tools such as ChatGPT may offer a way to distill these reports into clear, concise summaries, improving communication and efficiency in clinical settings.
Objective.— To evaluate the performance of ChatGPT-4o in summarizing detailed pathology reports into 1-sentence diagnoses that retain critical clinical information and are accessible to medical professionals.
Design.— We retrospectively analyzed 120 anonymized pathology reports from 2022 to 2023, focusing on 40 complex cases from each of 3 subspecialties: breast pathology, melanocytic lesions, and lymphomas. Using a standardized brief prompt, ChatGPT-4o generated a 1-sentence summary for each report. Two independent pathologists assessed each summary for inclusion of essential information, exclusion of irrelevant details, presence of critical errors, and overall readability.
Results.— The mean scores for inclusion of essential information were 8.09 (melanocytic lesions), 8.15 (breast cancers), and 9.55 (lymphomas). Critical error-free rates were 62.5%, 77.5%, and 95%, respectively. Exclusion of nonessential information scored consistently high across subspecialties, and readability was rated 10/10 in 119 of 120 cases.
Conclusions.— ChatGPT-4o, when used with a standardized prompt and expert oversight, shows promising ability to generate concise and readable summaries of pathology reports. While overall performance was strong, occasional errors and limitations in handling complex or multipart cases were noted. Further refinement and domain-specific model training may enhance the reliability and clinical utility of artificial intelligence-assisted reporting.