Evaluating the Use of ChatGPT to Accurately Simplify Patient-centered Information about Breast Cancer Prevention and Screening

Hana L Haver, Anuj K Gupta, Emily B Ambinder, Manisha Bahl, Eniola T Oluyemi, Jean Jeudy, Paul H Yi

Radiology: Imaging Cancer | DOI: 10.1148/rycan.230086 | Published 2024-03-01

Purpose: To evaluate the use of ChatGPT as a tool to simplify answers to common questions about breast cancer prevention and screening.

Materials and Methods: In this retrospective, exploratory study, ChatGPT was asked to simplify responses to 25 questions about breast cancer to a sixth-grade reading level in March and August 2023. Simplified responses were evaluated for clinical appropriateness. All original and simplified responses were assessed for reading ease on the Flesch Reading Ease Index and for readability on five scales: the Flesch-Kincaid Grade Level, Gunning Fog Index, Coleman-Liau Index, Automated Readability Index, and Simple Measure of Gobbledygook (SMOG) Index. Mean reading ease, readability, and word count were compared between original and simplified responses using paired t tests. The McNemar test was used to compare the proportions of responses with adequate reading ease (score of 60 or greater) and adequate readability (sixth-grade level).

Results: ChatGPT improved mean reading ease (original responses, 46 vs simplified responses, 70; P < .001) and readability (original, grade 13 vs simplified, grade 8.9; P < .001) and decreased word count (original, 193 vs simplified, 173; P < .001). Ninety-two percent (23 of 25) of simplified responses were considered clinically appropriate. All 25 (100%) simplified responses met criteria for adequate reading ease, compared with only 2 of 25 original responses (P < .001). Two of the 25 simplified responses (8%) met criteria for adequate readability.
Conclusion: ChatGPT simplified answers to common breast cancer screening and prevention questions, improving readability by four grade levels, though its potential to produce incorrect information necessitates physician oversight when using this tool.

Keywords: Mammography, Screening, Informatics, Breast, Education, Health Policy and Practice, Oncology, Technology Assessment

Supplemental material is available for this article. © RSNA, 2023.

Journal: Radiology: Imaging Cancer (JCR Q1, Oncology; impact factor 5.6)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10988327/pdf/
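The study's "adequate reading ease" threshold refers to the Flesch Reading Ease score, which is computed from sentence length and syllable density. As a minimal, self-contained illustration of how such a score works (this is the standard published formula with a crude syllable heuristic, not the validated tooling the authors used):

```python
import re

def count_syllables(word: str) -> int:
    """Crude syllable heuristic: count vowel groups, drop a silent trailing 'e'."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    n = len(groups)
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Scores of 60 or greater were the study's threshold for adequate reading ease."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

# Short, plain sentences score high; dense clinical prose scores low.
simple = "The cat sat. The dog ran. We had fun."
dense = "Comprehensive mammographic evaluation necessitates multidisciplinary radiological interpretation."
```

Production readability work would use a validated implementation (e.g. an established readability library) rather than this heuristic syllable counter.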
Citations: 0
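The abstract's McNemar comparison (25 of 25 simplified vs 2 of 25 original responses with adequate reading ease; P < .001) can be sketched with an exact binomial McNemar test. Note the discordant-pair count of 23 below is an assumption: it holds only if the 2 originals that met the threshold still met it after simplification, which the abstract does not state.

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Two-sided exact McNemar test on the discordant pair counts:
    b = pairs adequate only after simplification,
    c = pairs adequate only before simplification.
    Under H0 the discordant pairs split 50/50, so the p-value is a
    two-sided binomial tail probability."""
    n = b + c
    tail = sum(comb(n, i) for i in range(min(b, c) + 1)) * 0.5 ** n
    return min(2 * tail, 1.0)

# Assumed from the abstract: 23 responses became adequate, none became inadequate.
p_value = mcnemar_exact(23, 0)
```

With 23 discordant pairs all in one direction, the p-value is far below .001, consistent with the reported result.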