{"title":"提高人工智能生成的脆性骨折医疗信息的可读性:即时措辞在ChatGPT回复中的作用","authors":"Hakan Akkan, Gulce Kallem Seyyar","doi":"10.1007/s00198-024-07358-0","DOIUrl":null,"url":null,"abstract":"<p><p>Understanding how the questions used when interacting with chatbots impact the readability of the generated text is essential for effective health communication. Using descriptive queries instead of just keywords during interaction with ChatGPT results in more readable and understandable answers about fragility fractures.</p><p><strong>Purpose: </strong>Large language models like ChatGPT can enhance patients' understanding of medical information, making health decisions more accessible. Complex terms, such as \"fragility fracture,\" can confuse patients, so presenting its medical content in plain language is crucial. This study explored whether conversational prompts improve readability and understanding compared to keyword-based prompts when generating patient-centered health information on fragility fractures.</p><p><strong>Methods: </strong>The 32 most frequently searched keywords related to \"fragility fracture\" and \"osteoporotic fracture\" were identified using Google Trends. From this set, 24 keywords were selected based on relevance and entered sequentially into ChatGPT. Each keyword was tested with two prompt types: (1) plain language with keywords embedded and (2) keywords alone. The readability and comprehensibility of the AI-generated responses were assessed using the Flesch-Kincaid reading ease (FKRE) and Flesch-Kincaid grade level (FKGL), respectively. The scores of the responses were compared using the Mann-Whitney U test.</p><p><strong>Results: </strong>The FKRE scores indicated significantly higher readability with plain language prompts (median 34.35) compared to keyword-only prompts (median 23.60). Similarly, the FKGL indicated a lower grade level for plain language prompts (median 12.05) versus keyword-only (median 14.50), with both differences achieving statistical significance.</p><p><strong>Conclusion: </strong>Our findings suggest that using conversational prompts can enhance the readability of AI-generated medical information on fragility fractures. Clinicians and content creators should consider this approach when using AI for patient education to optimize comprehension.</p>","PeriodicalId":19638,"journal":{"name":"Osteoporosis International","volume":" ","pages":""},"PeriodicalIF":4.2000,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Improving readability in AI-generated medical information on fragility fractures: the role of prompt wording on ChatGPT's responses.\",\"authors\":\"Hakan Akkan, Gulce Kallem Seyyar\",\"doi\":\"10.1007/s00198-024-07358-0\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Understanding how the questions used when interacting with chatbots impact the readability of the generated text is essential for effective health communication. Using descriptive queries instead of just keywords during interaction with ChatGPT results in more readable and understandable answers about fragility fractures.</p><p><strong>Purpose: </strong>Large language models like ChatGPT can enhance patients' understanding of medical information, making health decisions more accessible. Complex terms, such as \\\"fragility fracture,\\\" can confuse patients, so presenting its medical content in plain language is crucial. 
This study explored whether conversational prompts improve readability and understanding compared to keyword-based prompts when generating patient-centered health information on fragility fractures.</p><p><strong>Methods: </strong>The 32 most frequently searched keywords related to \\\"fragility fracture\\\" and \\\"osteoporotic fracture\\\" were identified using Google Trends. From this set, 24 keywords were selected based on relevance and entered sequentially into ChatGPT. Each keyword was tested with two prompt types: (1) plain language with keywords embedded and (2) keywords alone. The readability and comprehensibility of the AI-generated responses were assessed using the Flesch-Kincaid reading ease (FKRE) and Flesch-Kincaid grade level (FKGL), respectively. The scores of the responses were compared using the Mann-Whitney U test.</p><p><strong>Results: </strong>The FKRE scores indicated significantly higher readability with plain language prompts (median 34.35) compared to keyword-only prompts (median 23.60). Similarly, the FKGL indicated a lower grade level for plain language prompts (median 12.05) versus keyword-only (median 14.50), with both differences achieving statistical significance.</p><p><strong>Conclusion: </strong>Our findings suggest that using conversational prompts can enhance the readability of AI-generated medical information on fragility fractures. Clinicians and content creators should consider this approach when using AI for patient education to optimize comprehension.</p>\",\"PeriodicalId\":19638,\"journal\":{\"name\":\"Osteoporosis International\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2025-01-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Osteoporosis International\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1007/s00198-024-07358-0\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENDOCRINOLOGY & METABOLISM\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Osteoporosis International","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1007/s00198-024-07358-0","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENDOCRINOLOGY & METABOLISM","Score":null,"Total":0}
Improving readability in AI-generated medical information on fragility fractures: the role of prompt wording on ChatGPT's responses.
Understanding how the wording of questions posed to chatbots affects the readability of the generated text is essential for effective health communication. Using descriptive queries rather than keywords alone when interacting with ChatGPT yields more readable and understandable answers about fragility fractures.
Purpose: Large language models such as ChatGPT can enhance patients' understanding of medical information and make health decisions more accessible. Complex terms such as "fragility fracture" can confuse patients, so presenting the related medical content in plain language is crucial. This study explored whether conversational prompts improve readability and comprehensibility compared with keyword-based prompts when generating patient-centered health information on fragility fractures.
Methods: The 32 most frequently searched keywords related to "fragility fracture" and "osteoporotic fracture" were identified using Google Trends. From this set, 24 keywords were selected based on relevance and entered sequentially into ChatGPT. Each keyword was tested with two prompt types: (1) a plain-language query with the keyword embedded and (2) the keyword alone. The readability and comprehensibility of the AI-generated responses were assessed using the Flesch-Kincaid reading ease (FKRE) and Flesch-Kincaid grade level (FKGL), respectively. Scores for the two prompt types were compared using the Mann-Whitney U test.
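The scoring and comparison steps described above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical reconstruction, assuming the Python packages textstat (for FKRE/FKGL) and scipy (for the Mann-Whitney U test); the prompt wording and the helper names (build_prompts, score_response, compare_groups) are illustrative placeholders, not the authors' exact prompts or pipeline.

```python
# Minimal sketch of the scoring/comparison pipeline (assumptions: textstat and
# scipy are used; prompt wording and helper names are illustrative only).
from scipy.stats import mannwhitneyu
import textstat


def build_prompts(keyword: str) -> dict:
    """Return the two prompt types tested for each keyword."""
    return {
        # (1) plain-language query with the keyword embedded (hypothetical wording)
        "plain_language": f"Can you explain {keyword} in simple terms for a patient?",
        # (2) the keyword alone
        "keyword_only": keyword,
    }


def score_response(text: str) -> tuple[float, float]:
    """Score one ChatGPT response.

    FKRE = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words); higher = easier.
    FKGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59; lower = easier.
    """
    return textstat.flesch_reading_ease(text), textstat.flesch_kincaid_grade(text)


def compare_groups(plain_scores: list[float], keyword_scores: list[float]):
    """Compare the two prompt conditions with the (two-sided) Mann-Whitney U test."""
    return mannwhitneyu(plain_scores, keyword_scores, alternative="two-sided")
```

Under these assumptions, each of the 24 keywords would produce one response per prompt type, each response would receive an FKRE and FKGL score, and the two score distributions would then be compared nonparametrically, which is appropriate for small samples without assuming normality.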
Results: The FKRE scores indicated significantly higher readability with plain language prompts (median 34.35) compared to keyword-only prompts (median 23.60). Similarly, the FKGL indicated a lower grade level for plain language prompts (median 12.05) versus keyword-only (median 14.50), with both differences achieving statistical significance.
Conclusion: Our findings suggest that using conversational prompts can enhance the readability of AI-generated medical information on fragility fractures. Clinicians and content creators should consider this approach when using AI for patient education to optimize comprehension.
Journal introduction:
An international multi-disciplinary journal which is a joint initiative between the International Osteoporosis Foundation and the National Osteoporosis Foundation of the USA, Osteoporosis International provides a forum for the communication and exchange of current ideas concerning the diagnosis, prevention, treatment and management of osteoporosis and other metabolic bone diseases.
It publishes: original papers - reporting progress and results in all areas of osteoporosis and its related fields; review articles - reflecting the present state of knowledge in special areas or summarizing limited themes in which discussion has led to clearly defined conclusions; educational articles - giving information on the progress of a topic of particular interest; case reports - of uncommon or interesting presentations of the condition.
While focusing on clinical research, the Journal will also accept submissions on more basic aspects of research, where they are considered by the editors to be relevant to the human disease spectrum.