{"title":"护士对人工ıntelligence聊天机器人健康素养教育的评价。","authors":"Gulsum Asiksoy","doi":"10.4103/jehp.jehp_1195_24","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI)-powered chatbots are emerging as a new tool in healthcare, offering the potential to provide patients with information and support. Despite their growing presence, there are concerns regarding the medical reliability of the information they provide and the potential risks to patient safety.</p><p><strong>Material and methods: </strong>The aim of this study is to assess the medical reliability of responses to health-related questions provided by an AI-powered chatbot and to evaluate the risks to patient safety. The study is designed using a mixed-methods phenomenology approach. The participants are 44 nurses working at a private hospital in Cyprus. Data collection was conducted via survey forms and focus group discussions. Quantitative data were analyzed using descriptive statistics, while qualitative data were examined using content analysis.</p><p><strong>Results: </strong>The results indicate that according to the nurses' evaluations, the medical reliability of the AI chatbot's responses is generally high. However, instances of incorrect or incomplete information were also noted. Specifically, the quantitative analysis showed that a majority of the nurses found the chatbot's responses to be accurate and useful. The qualitative analysis revealed concerns about the potential for the chatbot to misdirect patients or contribute to diagnostic errors. These risks highlight the importance of monitoring and improving the AI systems to minimize errors and enhance reliability.</p><p><strong>Conclusion: </strong>AI chatbots can provide valuable information and support to patients, improving accessibility and engagement in healthcare. However, concerns about medical reliability and patient safety remain. Continuous evaluation and improvement of these systems are necessary, alongside efforts to enhance patients' health literacy to help them accurately assess information from AI chatbots.</p>","PeriodicalId":15581,"journal":{"name":"Journal of Education and Health Promotion","volume":"14 ","pages":"128"},"PeriodicalIF":1.4000,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12017437/pdf/","citationCount":"0","resultStr":"{\"title\":\"Nurses' assessment of artificial ıntelligence chatbots for health literacy education.\",\"authors\":\"Gulsum Asiksoy\",\"doi\":\"10.4103/jehp.jehp_1195_24\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Artificial intelligence (AI)-powered chatbots are emerging as a new tool in healthcare, offering the potential to provide patients with information and support. Despite their growing presence, there are concerns regarding the medical reliability of the information they provide and the potential risks to patient safety.</p><p><strong>Material and methods: </strong>The aim of this study is to assess the medical reliability of responses to health-related questions provided by an AI-powered chatbot and to evaluate the risks to patient safety. The study is designed using a mixed-methods phenomenology approach. The participants are 44 nurses working at a private hospital in Cyprus. Data collection was conducted via survey forms and focus group discussions. 
Quantitative data were analyzed using descriptive statistics, while qualitative data were examined using content analysis.</p><p><strong>Results: </strong>The results indicate that according to the nurses' evaluations, the medical reliability of the AI chatbot's responses is generally high. However, instances of incorrect or incomplete information were also noted. Specifically, the quantitative analysis showed that a majority of the nurses found the chatbot's responses to be accurate and useful. The qualitative analysis revealed concerns about the potential for the chatbot to misdirect patients or contribute to diagnostic errors. These risks highlight the importance of monitoring and improving the AI systems to minimize errors and enhance reliability.</p><p><strong>Conclusion: </strong>AI chatbots can provide valuable information and support to patients, improving accessibility and engagement in healthcare. However, concerns about medical reliability and patient safety remain. Continuous evaluation and improvement of these systems are necessary, alongside efforts to enhance patients' health literacy to help them accurately assess information from AI chatbots.</p>\",\"PeriodicalId\":15581,\"journal\":{\"name\":\"Journal of Education and Health Promotion\",\"volume\":\"14 \",\"pages\":\"128\"},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2025-03-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12017437/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Education and Health Promotion\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.4103/jehp.jehp_1195_24\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q3\",\"JCRName\":\"EDUCATION, SCIENTIFIC DISCIPLINES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Education and Health Promotion","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4103/jehp.jehp_1195_24","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q3","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
Nurses' assessment of artificial intelligence chatbots for health literacy education.
Background: Artificial intelligence (AI)-powered chatbots are emerging as a new tool in healthcare, offering the potential to provide patients with information and support. Despite their growing presence, there are concerns regarding the medical reliability of the information they provide and the potential risks to patient safety.
Material and methods: The aim of this study was to assess the medical reliability of responses to health-related questions provided by an AI-powered chatbot and to evaluate the risks to patient safety. The study was designed using a mixed-methods phenomenological approach. The participants were 44 nurses working at a private hospital in Cyprus. Data were collected via survey forms and focus group discussions. Quantitative data were analyzed using descriptive statistics, while qualitative data were examined using content analysis.
Results: According to the nurses' evaluations, the medical reliability of the AI chatbot's responses was generally high; however, instances of incorrect or incomplete information were also noted. Specifically, the quantitative analysis showed that a majority of the nurses found the chatbot's responses to be accurate and useful, while the qualitative analysis revealed concerns about the potential for the chatbot to misdirect patients or contribute to diagnostic errors. These risks highlight the importance of monitoring and improving such AI systems to minimize errors and enhance reliability.
Conclusion: AI chatbots can provide valuable information and support to patients, improving accessibility and engagement in healthcare. However, concerns about medical reliability and patient safety remain. Continuous evaluation and improvement of these systems are necessary, alongside efforts to enhance patients' health literacy to help them accurately assess information from AI chatbots.