Leveraging Large Language Models to Enhance Patient Educational Resources in Rhinology

Ariana L Shaari, Rebecca A Ho, Annie Xu, Disha Patil, Lorik Berisha, Wayne D Hsueh
Background: To compare the readability of patient education materials (PEMs) on rhinologic conditions and procedures from the American Rhinologic Society (ARS) with versions of those materials rewritten by large language models (LLMs).
Methods: Forty-one PEMs from the ARS were retrieved. Readability was assessed with the Flesch-Kincaid Reading Ease (FKRE) and Flesch-Kincaid Grade Level (FKGL) scores, where a higher FKRE and a lower FKGL indicate better readability. Three LLMs (ChatGPT-4o, Google Gemini, and Microsoft Copilot) were then used to rewrite each ARS PEM at the recommended sixth-grade reading level. Readability scores were calculated and compared for each rewritten PEM.
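The abstract does not say which software computed these scores, so the following is a minimal Python sketch of the standard Flesch-Kincaid formulas only; the syllable counter is a naive heuristic added for illustration, not the study's method (published studies typically use calibrated readability tools).

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count contiguous vowel groups, with a crude
    # silent-'e' adjustment. Approximate only.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def readability(text: str) -> tuple[float, float]:
    """Return (FKRE, FKGL) using the standard Flesch-Kincaid formulas."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences          # words per sentence
    spw = syllables / max(len(words), 1)  # syllables per word
    fkre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fkre, fkgl

# A short plain-language sample scores roughly FKRE 66, FKGL 5,
# i.e., near the sixth-grade target the study aimed for.
sample = "Sinusitis is swelling of the sinuses. It can cause pain."
print(readability(sample))
```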
Results: A total of 164 PEMs were evaluated, including 123 generated by LLMs. The original ARS PEMs had a mean FKGL of 10.28, while AI-generated PEMs demonstrated significantly better readability, with a mean FKGL of 8.6 (P < .0001). Among the AI platforms, Gemini produced the most readable PEMs, reaching a mean FKGL of 7.5 and a mean FKRE of 65.5.
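The abstract reports group means and a P value but does not name the statistical test. Purely as an illustration, since each rewritten PEM derives from a specific original, one plausible analysis is a paired comparison of FKGL scores; the sketch below uses scipy's paired t-test with made-up data values, and is not necessarily the test the authors used.

```python
from scipy import stats

# Hypothetical illustration: paired FKGL scores for the same PEMs
# before (ARS original) and after LLM rewriting. Values are invented;
# the study's actual per-document scores are not given in the abstract.
original_fkgl = [10.1, 11.3, 9.8, 10.5, 9.9]   # made-up values
rewritten_fkgl = [8.4, 9.1, 8.0, 8.9, 8.2]     # made-up values

t_stat, p_value = stats.ttest_rel(original_fkgl, rewritten_fkgl)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```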
Conclusion: LLMs improved the readability of PEMs, potentially enhancing access to medical information for diverse populations. Despite these findings, healthcare providers and patients should cautiously appraise LLM-generated content, particularly for rhinologic conditions and procedures.

Level of evidence: N/A.