Drew Armstrong Pharm.D., Caroline Paul B.S., Brent McGlaughlin Pharm.D., David Hill Pharm.D.
{"title":"人工智能(AI)能教育患者吗?一项评估人工智能生成的患者教育材料的整体可读性和药剂师感知的研究","authors":"Drew Armstrong Pharm.D., Caroline Paul B.S., Brent McGlaughlin Pharm.D., David Hill Pharm.D.","doi":"10.1002/jac5.2006","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Introduction</h3>\n \n <p>Pharmacists are critical in providing safe and accurate education to patients on disease states and medications. Artificial intelligence (AI) has the capacity to generate patient education materials at a rapid rate, potentially saving healthcare resources. However, overall accuracy and comfort with these materials by pharmacists need to be assessed.</p>\n </section>\n \n <section>\n \n <h3> Objective</h3>\n \n <p>The purpose of this study was to assess the accuracy, readability, and likelihood of using AI-generated patient education materials for ten common medications and disease states.</p>\n </section>\n \n <section>\n \n <h3> Method<b>s</b></h3>\n \n <p>AI (Chat Generative Pre-Trained Transformer [ChatGPT] v3.5) was used to create patient education materials for the following medications or disease states: apixaban, Continuous Glucose Monitoring (CGM), the Dietary Approaches to Stop Hypertension (DASH) Diet, enoxaparin, hypertension, hypoglycemia, myocardial infarction, naloxone, semaglutide, and warfarin. The following prompt, “Write a patient education material for…” with these medications or disease states being at the end of the prompt, was entered into the ChatGPT (OpenAI, San Francisco, CA) software. A similar prompt, “Write a patient education material for…at a 6th-grade reading level or lower” using the same medications and disease states, was then completed. Ten clinical pharmacists were asked to review and assess the time it took them to review each educational material, make clinical and grammatical edits, their confidence in the clinical accuracy of the materials, and the likelihood that they would use them with their patients. These education materials were assessed for readability using the Flesh-Kincaid readability score.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>A total of 8 pharmacists completed both sets of reviews for a total of 16 patient education materials assessed. There was no statistical difference in any pharmacist assessment completed between the two prompts. The overall confidence in accuracy was fair, and the overall readability score of the AI-generated materials decreased from 11.65 to 5.87 after reviewing the 6th-grade prompt (<i>p</i> < .001).</p>\n </section>\n \n <section>\n \n <h3> Conclusion</h3>\n \n <p>AI-generated patient education materials show promise in clinical practice, however further validation of their clinical accuracy continues to be a burden. It is important to ensure that overall readability for patient education materials is at an appropriate level to increase the likelihood of patient understanding.</p>\n </section>\n </div>","PeriodicalId":73966,"journal":{"name":"Journal of the American College of Clinical Pharmacy : JACCP","volume":null,"pages":null},"PeriodicalIF":1.3000,"publicationDate":"2024-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Can artificial intelligence (AI) educate your patient? 
A study to assess overall readability and pharmacists' perception of AI-generated patient education materials\",\"authors\":\"Drew Armstrong Pharm.D., Caroline Paul B.S., Brent McGlaughlin Pharm.D., David Hill Pharm.D.\",\"doi\":\"10.1002/jac5.2006\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <h3> Introduction</h3>\\n \\n <p>Pharmacists are critical in providing safe and accurate education to patients on disease states and medications. Artificial intelligence (AI) has the capacity to generate patient education materials at a rapid rate, potentially saving healthcare resources. However, overall accuracy and comfort with these materials by pharmacists need to be assessed.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Objective</h3>\\n \\n <p>The purpose of this study was to assess the accuracy, readability, and likelihood of using AI-generated patient education materials for ten common medications and disease states.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Method<b>s</b></h3>\\n \\n <p>AI (Chat Generative Pre-Trained Transformer [ChatGPT] v3.5) was used to create patient education materials for the following medications or disease states: apixaban, Continuous Glucose Monitoring (CGM), the Dietary Approaches to Stop Hypertension (DASH) Diet, enoxaparin, hypertension, hypoglycemia, myocardial infarction, naloxone, semaglutide, and warfarin. The following prompt, “Write a patient education material for…” with these medications or disease states being at the end of the prompt, was entered into the ChatGPT (OpenAI, San Francisco, CA) software. A similar prompt, “Write a patient education material for…at a 6th-grade reading level or lower” using the same medications and disease states, was then completed. Ten clinical pharmacists were asked to review and assess the time it took them to review each educational material, make clinical and grammatical edits, their confidence in the clinical accuracy of the materials, and the likelihood that they would use them with their patients. These education materials were assessed for readability using the Flesh-Kincaid readability score.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Results</h3>\\n \\n <p>A total of 8 pharmacists completed both sets of reviews for a total of 16 patient education materials assessed. There was no statistical difference in any pharmacist assessment completed between the two prompts. The overall confidence in accuracy was fair, and the overall readability score of the AI-generated materials decreased from 11.65 to 5.87 after reviewing the 6th-grade prompt (<i>p</i> < .001).</p>\\n </section>\\n \\n <section>\\n \\n <h3> Conclusion</h3>\\n \\n <p>AI-generated patient education materials show promise in clinical practice, however further validation of their clinical accuracy continues to be a burden. 
It is important to ensure that overall readability for patient education materials is at an appropriate level to increase the likelihood of patient understanding.</p>\\n </section>\\n </div>\",\"PeriodicalId\":73966,\"journal\":{\"name\":\"Journal of the American College of Clinical Pharmacy : JACCP\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.3000,\"publicationDate\":\"2024-06-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of the American College of Clinical Pharmacy : JACCP\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/jac5.2006\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"PHARMACOLOGY & PHARMACY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of the American College of Clinical Pharmacy : JACCP","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/jac5.2006","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"PHARMACOLOGY & PHARMACY","Score":null,"Total":0}
Can artificial intelligence (AI) educate your patient? A study to assess overall readability and pharmacists' perception of AI-generated patient education materials
Introduction
Pharmacists are critical in providing safe and accurate education to patients on disease states and medications. Artificial intelligence (AI) can generate patient education materials rapidly, potentially saving healthcare resources. However, the accuracy of these materials, and pharmacists' comfort with them, need to be assessed.
Objective
The purpose of this study was to assess the accuracy and readability of AI-generated patient education materials for ten common medications and disease states, and the likelihood that pharmacists would use them.
Methods
AI (Chat Generative Pre-Trained Transformer [ChatGPT] v3.5) was used to create patient education materials for the following medications or disease states: apixaban, Continuous Glucose Monitoring (CGM), the Dietary Approaches to Stop Hypertension (DASH) Diet, enoxaparin, hypertension, hypoglycemia, myocardial infarction, naloxone, semaglutide, and warfarin. The prompt “Write a patient education material for…”, with each medication or disease state appended, was entered into ChatGPT (OpenAI, San Francisco, CA). A similar prompt, “Write a patient education material for…at a 6th-grade reading level or lower,” was then entered for the same medications and disease states. Ten clinical pharmacists were asked to review each educational material and to record the time the review took, make clinical and grammatical edits, and rate their confidence in the clinical accuracy of the material and the likelihood that they would use it with their patients. These education materials were assessed for readability using the Flesch-Kincaid readability score.
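For illustration, the generation-and-scoring workflow described above can be sketched in Python. This is a minimal sketch, assuming the OpenAI Python client (openai>=1.0) and the textstat package for Flesch-Kincaid grading; the abstract does not specify the tooling the authors used, and the model name "gpt-3.5-turbo" is a stand-in for ChatGPT v3.5.

```python
# Minimal sketch of the study's two-prompt workflow (assumed tooling, not
# the authors' actual scripts): generate a material per topic and prompt,
# then score its Flesch-Kincaid grade level.
from openai import OpenAI
import textstat

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TOPICS = [
    "apixaban", "Continuous Glucose Monitoring (CGM)",
    "the Dietary Approaches to Stop Hypertension (DASH) Diet",
    "enoxaparin", "hypertension", "hypoglycemia",
    "myocardial infarction", "naloxone", "semaglutide", "warfarin",
]

PROMPTS = {
    "standard": "Write a patient education material for {topic}",
    "6th_grade": ("Write a patient education material for {topic} "
                  "at a 6th-grade reading level or lower"),
}

def generate_material(prompt: str) -> str:
    """Send one prompt to the chat model and return the generated text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for ChatGPT v3.5
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for topic in TOPICS:
    for label, template in PROMPTS.items():
        material = generate_material(template.format(topic=topic))
        # Flesch-Kincaid grade level:
        #   0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
        grade = textstat.flesch_kincaid_grade(material)
        print(f"{topic} [{label}]: FK grade {grade:.2f}")
```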
Results
A total of 8 pharmacists completed both sets of reviews, for a total of 16 patient education materials assessed. There were no statistically significant differences in any pharmacist assessment between the two prompts. Overall confidence in accuracy was fair, and the overall readability score of the AI-generated materials decreased from a grade level of 11.65 to 5.87 with the 6th-grade prompt (p < .001).
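The abstract does not name the statistical test behind the reported p-value. As an illustration only, a paired comparison of per-material readability grades could be run as below; the scores shown are hypothetical placeholders, not study data.

```python
# Hypothetical sketch: paired t-test on per-material Flesch-Kincaid grades
# from the two prompts. The actual test and data are not given in the abstract.
from scipy import stats

# Placeholder grades, one pair per topic (NOT the study's measurements).
standard_grades = [12.1, 11.4, 10.9, 12.6, 11.8, 11.2, 11.9, 11.5, 11.0, 12.1]
sixth_grade_grades = [6.0, 5.5, 5.9, 6.2, 5.4, 5.8, 6.1, 5.6, 5.7, 6.5]

t_stat, p_value = stats.ttest_rel(standard_grades, sixth_grade_grades)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```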
Conclusion
AI-generated patient education materials show promise in clinical practice; however, further validation of their clinical accuracy continues to be a burden. It is important to ensure that the overall readability of patient education materials is at an appropriate level to increase the likelihood of patient understanding.