Talking technology: exploring chatbots as a tool for cataract patient education.
Authors: İbrahim Edhem Yılmaz, Levent Doğan
DOI: 10.1080/08164622.2023.2298812 (https://doi.org/10.1080/08164622.2023.2298812)
Journal: Clinical and Experimental Optometry, pages 56-64; Impact Factor: 1.7; JCR: Q3 (Ophthalmology); Region: 4 (Medicine)
Published: 2025-01-01 (Epub 2024-01-09); Publication type: Journal Article; Open access: no
Source: Semantic Scholar
Citations: 0
Abstract
Talking technology: exploring chatbots as a tool for cataract patient education.
Clinical relevance: Worldwide, millions suffer from cataracts, which impair vision and quality of life. Cataract education improves outcomes, satisfaction, and treatment adherence. Lack of health literacy, language and cultural barriers, personal preferences, and limited resources may all impede effective communication.
Background: AI can improve patient education by providing personalised, interactive, and accessible information tailored to patient understanding, interest, and motivation. AI chatbots can have human-like conversations and give advice on numerous topics.
Methods: This study investigated the efficacy of chatbots in cataract patient education relative to traditional resources such as the AAO website, focusing on information accuracy, understandability, actionability, and readability. A descriptive comparative design was used to analyse quantitative data from frequently asked questions about cataracts answered by ChatGPT, Bard, Bing AI, and the AAO website. The SOLO taxonomy, PEMAT, and the Flesch-Kincaid reading-ease score were used to collect and analyse the data.
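For readers unfamiliar with the readability metric used here: the Flesch reading-ease score is computed as 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word), with higher scores indicating easier text. A minimal Python sketch follows; the syllable counter is a rough vowel-group heuristic for illustration, not the dictionary-based counter that published tools use, so its scores may differ slightly from those reported in the study.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels (incl. 'y');
    # every word has at least one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    # Discount a silent trailing 'e' (e.g. "cake") when it isn't the only vowel group.
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(1, n)

def flesch_reading_ease(text: str) -> float:
    # Split sentences on terminal punctuation, keep non-empty fragments.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

On this 0-100-ish scale, the Bard mean of about 55 corresponds to "fairly difficult" prose, while the ChatGPT mean of about 34 falls into the "difficult" (college-level) band.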
Results: Chatbots scored higher than the AAO website on cataract-related questions in terms of accuracy (mean SOLO score: ChatGPT 3.1 ± 0.31, Bard 2.9 ± 0.72, Bing AI 2.65 ± 0.49, AAO website 2.4 ± 0.6; p < 0.001). For understandability (mean PEMAT-U score: AAO website 0.89 ± 0.04, ChatGPT 0.84 ± 0.02, Bard 0.84 ± 0.02, Bing AI 0.81 ± 0.02; p < 0.001) and actionability (mean PEMAT-A score: ChatGPT 0.86 ± 0.03, Bard 0.85 ± 0.06, Bing AI 0.81 ± 0.05, AAO website 0.81 ± 0.06; p < 0.001), the AAO website scored better than the chatbots. Flesch-Kincaid reading-ease analysis showed that Bard (55.5 ± 8.48) had the highest mean score, followed by the AAO website (51.96 ± 12.46), Bing AI (41.77 ± 9.53), and ChatGPT (34.38 ± 9.75; p < 0.001).
Conclusion: Chatbots have the potential to provide more detailed and accurate data than the AAO website. On the other hand, the AAO website has the advantage of providing information that is more understandable and practical. When patient preferences are not taken into account, generalised or biased information can decrease reliability.
Journal introduction:
Clinical and Experimental Optometry is a peer-reviewed journal listed by ISI and abstracted by PubMed, Web of Science, Scopus, Science Citation Index and Current Contents. It publishes original research papers and reviews in clinical optometry and vision science. Debate and discussion of controversial scientific and clinical issues are encouraged, and letters to the Editor and short communications expressing points of view on matters within the Journal's areas of interest are welcome. The Journal is published six times annually.