{"title":"大型语言模型:释放甲状腺眼病患者教育的新潜力。","authors":"Yuwan Gao, Qi Xu, Ou Zhang, Hongliang Wang, Yunlong Wang, Jiale Wang, Xiaohui Chen","doi":"10.1007/s12020-025-04339-z","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>This study aims to evaluate the performance of three large language models (LLMs) in generating patient education materials (PEMs) for thyroid eye disease (TED), intending to improve patients' understanding and awareness of TED.</p><p><strong>Methods: </strong>We evaluated the performance of ChatGPT-4o, Claude 3.5, and Gemini 1.5 in generating PEMs for TED by designing different prompts. First, we produced TED patient educational brochures based on prompts A and B, respectively. Prompt B asked to make the content simple for sixth graders. Next, we designed two responses to frequently asked questions (FAQs) about TED: standard responses and simplified responses, where the simplified responses were optimized through specific prompts. All generated content was systematically evaluated based on dimensions such as quality, understandability, actionability, accuracy, and empathy. The readability of the content was analyzed using the online tool Readable.com (including FKGL: Flesch-Kincaid Grade Level and SMOG: Simple Measure of Gobbledygook).</p><p><strong>Results: </strong>Both prompt A and prompt B generated brochures that performed excellently in terms of quality (DISCERN ≥ 4), understandability (PEMAT Understandability ≥70%), accuracy (Score ≥4), and empathy (Score ≥4), with no significant differences between the two. However, both failed to meet the \"actionable\" standard (PEMAT Actionability <70%). Regarding readability, prompt B was easier to understand than prompt A, although the optimized version of prompt B still did not reach the ideal readability level. Additionally, a comparative analysis of FAQs about TED on Google using LLMs showed that, regardless of whether the response was standard or simplified, the LLM's performance outperformed Google, yielding results similar to those generated by the brochures.</p><p><strong>Conclusion: </strong>Overall, LLMs, as a powerful tool, demonstrate significant potential in generating PEMs for TED. They are capable of producing high-quality, understandable, accurate, and empathetic content, but there is still room for improvement in terms of readability.</p>","PeriodicalId":49211,"journal":{"name":"Endocrine","volume":" ","pages":""},"PeriodicalIF":2.9000,"publicationDate":"2025-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Large language models: unlocking new potential in patient education for thyroid eye disease.\",\"authors\":\"Yuwan Gao, Qi Xu, Ou Zhang, Hongliang Wang, Yunlong Wang, Jiale Wang, Xiaohui Chen\",\"doi\":\"10.1007/s12020-025-04339-z\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>This study aims to evaluate the performance of three large language models (LLMs) in generating patient education materials (PEMs) for thyroid eye disease (TED), intending to improve patients' understanding and awareness of TED.</p><p><strong>Methods: </strong>We evaluated the performance of ChatGPT-4o, Claude 3.5, and Gemini 1.5 in generating PEMs for TED by designing different prompts. First, we produced TED patient educational brochures based on prompts A and B, respectively. Prompt B asked to make the content simple for sixth graders. 
Next, we designed two responses to frequently asked questions (FAQs) about TED: standard responses and simplified responses, where the simplified responses were optimized through specific prompts. All generated content was systematically evaluated based on dimensions such as quality, understandability, actionability, accuracy, and empathy. The readability of the content was analyzed using the online tool Readable.com (including FKGL: Flesch-Kincaid Grade Level and SMOG: Simple Measure of Gobbledygook).</p><p><strong>Results: </strong>Both prompt A and prompt B generated brochures that performed excellently in terms of quality (DISCERN ≥ 4), understandability (PEMAT Understandability ≥70%), accuracy (Score ≥4), and empathy (Score ≥4), with no significant differences between the two. However, both failed to meet the \\\"actionable\\\" standard (PEMAT Actionability <70%). Regarding readability, prompt B was easier to understand than prompt A, although the optimized version of prompt B still did not reach the ideal readability level. Additionally, a comparative analysis of FAQs about TED on Google using LLMs showed that, regardless of whether the response was standard or simplified, the LLM's performance outperformed Google, yielding results similar to those generated by the brochures.</p><p><strong>Conclusion: </strong>Overall, LLMs, as a powerful tool, demonstrate significant potential in generating PEMs for TED. They are capable of producing high-quality, understandable, accurate, and empathetic content, but there is still room for improvement in terms of readability.</p>\",\"PeriodicalId\":49211,\"journal\":{\"name\":\"Endocrine\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2025-07-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Endocrine\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1007/s12020-025-04339-z\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENDOCRINOLOGY & METABOLISM\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Endocrine","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1007/s12020-025-04339-z","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENDOCRINOLOGY & METABOLISM","Score":null,"Total":0}
Large language models: unlocking new potential in patient education for thyroid eye disease.
Purpose: This study aims to evaluate the performance of three large language models (LLMs) in generating patient education materials (PEMs) for thyroid eye disease (TED), with the goal of improving patients' understanding and awareness of TED.
Methods: We evaluated the performance of ChatGPT-4o, Claude 3.5, and Gemini 1.5 in generating PEMs for TED using different prompts. First, we produced TED patient education brochures from prompt A and prompt B; prompt B additionally asked for content written at a sixth-grade reading level. Next, we generated two sets of responses to frequently asked questions (FAQs) about TED: standard responses and simplified responses, the latter optimized through specific prompts. All generated content was systematically evaluated across the dimensions of quality, understandability, actionability, accuracy, and empathy. Readability was analyzed with the online tool Readable.com, using the Flesch-Kincaid Grade Level (FKGL) and the Simple Measure of Gobbledygook (SMOG).
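For context, FKGL and SMOG are standard readability formulas based on sentence, word, and syllable counts. The minimal Python sketch below illustrates the published formulas only; the function names (readability_scores, count_syllables) are illustrative, the syllable counter is a crude heuristic, and none of this reflects how Readable.com implements its scoring.

import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels (illustrative only).
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability_scores(text: str) -> dict:
    # Compute FKGL and SMOG from raw text using the standard published formulas.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)

    n_sent, n_words = len(sentences), len(words)
    # Flesch-Kincaid Grade Level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    fkgl = 0.39 * (n_words / n_sent) + 11.8 * (syllables / n_words) - 15.59
    # SMOG grade: 1.0430*sqrt(polysyllables * 30/sentences) + 3.1291
    smog = 1.0430 * (polysyllables * 30 / n_sent) ** 0.5 + 3.1291
    return {"FKGL": round(fkgl, 1), "SMOG": round(smog, 1)}

print(readability_scores("Thyroid eye disease can cause bulging eyes. Treatment may include steroids."))

Both scores approximate the US school grade needed to understand a text, which is why a sixth-grade target (grade level around 6) is commonly used as the benchmark for patient education materials.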
Results: Brochures generated from both prompt A and prompt B performed excellently in quality (DISCERN ≥ 4), understandability (PEMAT understandability ≥ 70%), accuracy (score ≥ 4), and empathy (score ≥ 4), with no significant differences between the two. However, neither met the actionability standard (PEMAT actionability < 70%). Regarding readability, the prompt B brochure was easier to read than the prompt A brochure, although even the simplified prompt B version did not reach the ideal readability level. In addition, a comparative analysis of LLM-generated answers to TED FAQs against Google search results showed that, for both standard and simplified responses, the LLMs outperformed Google, with results similar to those obtained for the brochures.
Conclusion: Overall, LLMs are a powerful tool with significant potential for generating PEMs for TED. They can produce high-quality, understandable, accurate, and empathetic content, but readability still leaves room for improvement.
Journal introduction:
Well-established as a major journal in today's rapidly advancing experimental and clinical research areas, Endocrine publishes original articles devoted to basic (including molecular, cellular and physiological studies), translational and clinical research in all the different fields of endocrinology and metabolism. Articles will be accepted based on peer review, priority, and editorial decision. Invited reviews, mini-reviews and viewpoints on relevant pathophysiological and clinical topics, as well as Editorials on articles appearing in the Journal, are published. Unsolicited Editorials will be evaluated by the editorial team. Outcomes of scientific meetings, as well as guidelines and position statements, may be submitted. The Journal also considers special feature articles in the field of endocrine genetics and epigenetics, as well as articles devoted to novel methods and techniques in endocrinology.
Endocrine covers controversial, clinical endocrine issues. Meta-analyses on endocrine and metabolic topics are also accepted. Descriptions of single clinical cases and/or small patient studies are not published unless of exceptional interest. However, reports of novel imaging studies and endocrine side effects in single patients may be considered. Research letters and letters to the editor related or unrelated to recently published articles can be submitted.
Endocrine covers leading topics in endocrinology such as neuroendocrinology, pituitary and hypothalamic peptides, thyroid physiological and clinical aspects, bone and mineral metabolism and osteoporosis, obesity, lipid and energy metabolism and food intake control, insulin, Type 1 and Type 2 diabetes, hormones of male and female reproduction, adrenal diseases, pediatric and geriatric endocrinology, endocrine hypertension and endocrine oncology.