{"title":"Talking technology: exploring chatbots as a tool for cataract patient education.","authors":"I Brahim Edhem Yılmaz, Levent Doğan","doi":"10.1080/08164622.2023.2298812","DOIUrl":null,"url":null,"abstract":"<p><strong>Clinical relevance: </strong>Worldwide, millions suffer from cataracts, which impair vision and quality of life. Cataract education improves outcomes, satisfaction, and treatment adherence. Lack of health literacy, language and cultural barriers, personal preferences, and limited resources may all impede effective communication.</p><p><strong>Background: </strong>AI can improve patient education by providing personalised, interactive, and accessible information tailored to patient understanding, interest, and motivation. AI chatbots can have human-like conversations and give advice on numerous topics.</p><p><strong>Methods: </strong>This study investigated the efficacy of chatbots in cataract patient education relative to traditional resources like the AAO website, focusing on information accuracy,understandability, actionability, and readability. A descriptive comparative design was used to analyse quantitative data from frequently asked questions about cataracts answered by ChatGPT, Bard, Bing AI, and the AAO website. SOLO taxonomy, PEMAT, and the Flesch-Kincaid ease score were used to collect and analyse the data.</p><p><strong>Results: </strong>Chatbots scored higher than AAO website on cataract-related questions in terms of accuracy (mean SOLO score ChatGPT: 3.1 ± 0.31, Bard: 2.9 ± 0.72, Bing AI: 2.65 ± 0.49, AAO website: 2.4 ± 0.6, (<i>p</i> < 0.001)). For understandability (mean PEMAT-U score AAO website: 0,89 ± 0,04, ChatGPT 0,84 ± 0,02, Bard: 0,84 ± 0,02, Bing AI: 0,81 ± 0,02, (<i>p</i> < 0.001)), and actionability (mean PEMAT-A score ChatGPT: 0.86 ± 0.03, Bard: 0.85 ± 0.06, Bing AI: 0.81 ± 0.05, AAO website: 0.81 ± 0.06, (<i>p</i> < 0.001)) AAO website scored better than chatbots. 
Flesch-Kincaid readability ease analysis showed that Bard (55,5 ± 8,48) had the highest mean score, followed by AAO website (51,96 ± 12,46), Bing AI (41,77 ± 9,53), and ChatGPT (34,38 ± 9,75, (<i>p</i> < 0.001)).</p><p><strong>Conclusion: </strong>Chatbots have the potential to provide more detailed and accurate data than the AAO website. On the other hand, the AAO website has the advantage of providing information that is more understandable and practical. When patient preferences are not taken into account, generalised or biased information can decrease reliability.</p>","PeriodicalId":10214,"journal":{"name":"Clinical and Experimental Optometry","volume":" ","pages":"56-64"},"PeriodicalIF":1.7000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Clinical and Experimental Optometry","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1080/08164622.2023.2298812","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/9 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
Citations: 0
Abstract
Clinical relevance: Worldwide, millions of people have cataracts, which impair vision and quality of life. Cataract education improves outcomes, satisfaction, and treatment adherence, but low health literacy, language and cultural barriers, personal preferences, and limited resources can all impede effective communication.
Background: AI can improve patient education by providing personalised, interactive, and accessible information tailored to patient understanding, interest, and motivation. AI chatbots can have human-like conversations and give advice on numerous topics.
Methods: This study investigated the efficacy of chatbots in cataract patient education relative to traditional resources such as the AAO website, focusing on information accuracy, understandability, actionability, and readability. A descriptive comparative design was used to analyse quantitative data from frequently asked questions about cataracts answered by ChatGPT, Bard, Bing AI, and the AAO website. The SOLO taxonomy, the PEMAT, and the Flesch-Kincaid reading ease score were used to collect and analyse the data.
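The Flesch reading-ease metric named in the Methods is a fixed formula over sentence and word statistics. Below is a minimal illustrative sketch of how such a score can be computed; the syllable counter is a naive vowel-group heuristic, and the study's actual tooling is not specified in the abstract:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels; drop a trailing
    # silent "e"; every word counts as at least one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text: str) -> float:
    # Flesch reading ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/word).
    # Higher scores mean easier-to-read text.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Short, monosyllabic sentences score near the top of the 0-100 scale, which is why plain-language patient material targets high values.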
Results: On cataract-related questions, the chatbots scored higher than the AAO website for accuracy (mean SOLO score ChatGPT: 3.1 ± 0.31, Bard: 2.9 ± 0.72, Bing AI: 2.65 ± 0.49, AAO website: 2.4 ± 0.6; p < 0.001). The AAO website scored best for understandability (mean PEMAT-U score AAO website: 0.89 ± 0.04, ChatGPT: 0.84 ± 0.02, Bard: 0.84 ± 0.02, Bing AI: 0.81 ± 0.02; p < 0.001), whereas for actionability the leading chatbots scored slightly higher (mean PEMAT-A score ChatGPT: 0.86 ± 0.03, Bard: 0.85 ± 0.06, Bing AI: 0.81 ± 0.05, AAO website: 0.81 ± 0.06; p < 0.001). Flesch-Kincaid reading ease analysis showed that Bard (55.5 ± 8.48) had the highest mean score, i.e. the most readable text, followed by the AAO website (51.96 ± 12.46), Bing AI (41.77 ± 9.53), and ChatGPT (34.38 ± 9.75; p < 0.001).
Conclusion: Chatbots have the potential to provide more detailed and accurate information than the AAO website, while the AAO website has the advantage of presenting information in a more understandable form. When patient preferences are not taken into account, generalised or biased information can reduce reliability.
Journal introduction:
Clinical and Experimental Optometry is a peer-reviewed journal listed by ISI and abstracted by PubMed, Web of Science, Scopus, Science Citation Index and Current Contents. It publishes original research papers and reviews in clinical optometry and vision science. Debate and discussion of controversial scientific and clinical issues is encouraged, and letters to the Editor and short communications expressing points of view on matters within the Journal's areas of interest are welcome. The Journal is published six times annually.