{"title":"利用聊天机器人消除老年人的健康误导:参与式设计研究。","authors":"Wei Peng, Hee Rin Lee, Sue Lim","doi":"10.2196/60712","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Older adults, a population particularly susceptible to misinformation, may experience attempts at health-related scams or defrauding, and they may unknowingly spread misinformation. Previous research has investigated managing misinformation through media literacy education or supporting users by fact-checking information and cautioning for potential misinformation content, yet studies focusing on older adults are limited. Chatbots have the potential to educate and support older adults in misinformation management. However, many studies focusing on designing technology for older adults use the needs-based approach and consider aging as a deficit, leading to issues in technology adoption. Instead, we adopted the asset-based approach, inviting older adults to be active collaborators in envisioning how intelligent technologies can enhance their misinformation management practices.</p><p><strong>Objective: </strong>This study aims to understand how older adults may use chatbots' capabilities for misinformation management.</p><p><strong>Methods: </strong>We conducted 5 participatory design workshops with a total of 17 older adult participants to ideate ways in which chatbots can help them manage misinformation. The workshops included 3 stages: developing scenarios reflecting older adults' encounters with misinformation in their lives, understanding existing chatbot platforms, and envisioning how chatbots can help intervene in the scenarios from stage 1.</p><p><strong>Results: </strong>We found that issues with older adults' misinformation management arose more from interpersonal relationships than individuals' ability to detect misinformation in pieces of information. This finding underscored the importance of chatbots to act as mediators that facilitate communication and help resolve conflict. In addition, participants emphasized the importance of autonomy. They desired chatbots to teach them to navigate the information landscape and come to conclusions about misinformation on their own. Finally, we found that older adults' distrust in IT companies and governments' ability to regulate the IT industry affected their trust in chatbots. Thus, chatbot designers should consider using well-trusted sources and practicing transparency to increase older adults' trust in the chatbot-based tools. Overall, our results highlight the need for chatbot-based misinformation tools to go beyond fact checking.</p><p><strong>Conclusions: </strong>This study provides insights for how chatbots can be designed as part of technological systems for misinformation management among older adults. 
Our study underscores the importance of inviting older adults to be active co-designers of chatbot-based interventions.</p>","PeriodicalId":14841,"journal":{"name":"JMIR Formative Research","volume":null,"pages":null},"PeriodicalIF":2.0000,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11512138/pdf/","citationCount":"0","resultStr":"{\"title\":\"Leveraging Chatbots to Combat Health Misinformation for Older Adults: Participatory Design Study.\",\"authors\":\"Wei Peng, Hee Rin Lee, Sue Lim\",\"doi\":\"10.2196/60712\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Older adults, a population particularly susceptible to misinformation, may experience attempts at health-related scams or defrauding, and they may unknowingly spread misinformation. Previous research has investigated managing misinformation through media literacy education or supporting users by fact-checking information and cautioning for potential misinformation content, yet studies focusing on older adults are limited. Chatbots have the potential to educate and support older adults in misinformation management. However, many studies focusing on designing technology for older adults use the needs-based approach and consider aging as a deficit, leading to issues in technology adoption. Instead, we adopted the asset-based approach, inviting older adults to be active collaborators in envisioning how intelligent technologies can enhance their misinformation management practices.</p><p><strong>Objective: </strong>This study aims to understand how older adults may use chatbots' capabilities for misinformation management.</p><p><strong>Methods: </strong>We conducted 5 participatory design workshops with a total of 17 older adult participants to ideate ways in which chatbots can help them manage misinformation. The workshops included 3 stages: developing scenarios reflecting older adults' encounters with misinformation in their lives, understanding existing chatbot platforms, and envisioning how chatbots can help intervene in the scenarios from stage 1.</p><p><strong>Results: </strong>We found that issues with older adults' misinformation management arose more from interpersonal relationships than individuals' ability to detect misinformation in pieces of information. This finding underscored the importance of chatbots to act as mediators that facilitate communication and help resolve conflict. In addition, participants emphasized the importance of autonomy. They desired chatbots to teach them to navigate the information landscape and come to conclusions about misinformation on their own. Finally, we found that older adults' distrust in IT companies and governments' ability to regulate the IT industry affected their trust in chatbots. Thus, chatbot designers should consider using well-trusted sources and practicing transparency to increase older adults' trust in the chatbot-based tools. Overall, our results highlight the need for chatbot-based misinformation tools to go beyond fact checking.</p><p><strong>Conclusions: </strong>This study provides insights for how chatbots can be designed as part of technological systems for misinformation management among older adults. 
Our study underscores the importance of inviting older adults to be active co-designers of chatbot-based interventions.</p>\",\"PeriodicalId\":14841,\"journal\":{\"name\":\"JMIR Formative Research\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.0000,\"publicationDate\":\"2024-10-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11512138/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"JMIR Formative Research\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2196/60712\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"HEALTH CARE SCIENCES & SERVICES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"JMIR Formative Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2196/60712","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
引用次数: 0
摘要
背景:老年人是特别容易受到错误信息影响的人群,他们可能会遇到与健康相关的诈骗或欺诈行为,也可能会在不知情的情况下传播错误信息。以前的研究曾调查过通过媒体扫盲教育来管理误导信息,或通过事实核查信息和提醒潜在的误导信息内容来支持用户,但针对老年人的研究还很有限。聊天机器人有可能在误导信息管理方面为老年人提供教育和支持。然而,许多关于为老年人设计技术的研究都采用了基于需求的方法,并将老龄化视为一种缺陷,从而导致技术应用方面的问题。相反,我们采用了以资产为基础的方法,邀请老年人成为积极的合作者,共同设想智能技术如何加强他们的错误信息管理实践:本研究旨在了解老年人如何利用聊天机器人的功能进行错误信息的管理:我们举办了 5 次参与式设计研讨会,共有 17 位老年人参加,共同探讨聊天机器人如何帮助他们管理错误信息。研讨会包括 3 个阶段:开发反映老年人在生活中遇到错误信息的场景,了解现有聊天机器人平台,设想聊天机器人如何帮助干预第 1 阶段的场景:我们发现,老年人管理错误信息的问题更多是由人际关系引起的,而不是个人从信息中发现错误信息的能力。这一发现强调了聊天机器人作为调解人促进沟通和帮助解决冲突的重要性。此外,参与者还强调了自主的重要性。他们希望聊天机器人能教他们浏览信息,自己对错误信息得出结论。最后,我们发现老年人对 IT 公司和政府监管 IT 行业能力的不信任影响了他们对聊天机器人的信任。因此,聊天机器人设计者应考虑使用可信赖的信息来源并提高透明度,以增加老年人对聊天机器人工具的信任。总之,我们的研究结果突出表明,基于聊天机器人的误导信息工具需要超越事实核查:本研究为如何将聊天机器人设计成老年人误导信息管理技术系统的一部分提供了启示。我们的研究强调了邀请老年人成为基于聊天机器人的干预措施的积极共同设计者的重要性。
Background: Older adults, a population particularly susceptible to misinformation, may be targeted by health-related scams or fraud, and they may unknowingly spread misinformation. Previous research has investigated managing misinformation through media literacy education or by supporting users with fact-checking and warnings about potentially misleading content, yet studies focusing on older adults are limited. Chatbots have the potential to educate and support older adults in misinformation management. However, many studies on designing technology for older adults take a needs-based approach that treats aging as a deficit, which creates problems for technology adoption. Instead, we adopted an asset-based approach, inviting older adults to be active collaborators in envisioning how intelligent technologies can enhance their misinformation management practices.
Objective: This study aims to understand how older adults may use chatbots' capabilities for misinformation management.
Methods: We conducted 5 participatory design workshops with a total of 17 older adult participants to ideate ways in which chatbots could help them manage misinformation. The workshops comprised 3 stages: developing scenarios that reflected the participants' encounters with misinformation in their daily lives, understanding existing chatbot platforms, and envisioning how chatbots could help intervene in the scenarios developed in stage 1.
Results: We found that issues with older adults' misinformation management arose more from interpersonal relationships than from individuals' ability to detect misinformation in specific pieces of content. This finding underscores the importance of designing chatbots that act as mediators, facilitating communication and helping resolve conflict. In addition, participants emphasized the importance of autonomy: they wanted chatbots to teach them to navigate the information landscape and reach their own conclusions about misinformation. Finally, we found that older adults' distrust of IT companies, and of governments' ability to regulate the IT industry, affected their trust in chatbots. Chatbot designers should therefore consider drawing on well-trusted sources and practicing transparency to increase older adults' trust in chatbot-based tools. Overall, our results highlight the need for chatbot-based misinformation tools to go beyond fact-checking (one way these design implications could be operationalized is sketched after the abstract).
Conclusions: This study provides insights into how chatbots can be designed as part of technological systems for misinformation management among older adults. Our study underscores the importance of inviting older adults to be active co-designers of chatbot-based interventions.
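The following sketch is not part of the study; it is a minimal, hypothetical illustration of the design implications reported above: a chatbot reply that explains rather than issues a verdict, lists well-trusted sources the user can verify independently (supporting autonomy), and carries a transparency note about how the assistant works. All names (`Source`, `ChatbotReply`, `format_reply`) and the response format are assumptions invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical illustration (not from the study): a chatbot reply that
# surfaces its sources and a transparency note, so users can weigh the
# evidence and reach their own conclusions about a suspect claim.

@dataclass
class Source:
    name: str  # e.g., a well-trusted public health institution
    url: str

@dataclass
class ChatbotReply:
    answer: str                      # an explanation, not a bare true/false verdict
    sources: list[Source] = field(default_factory=list)
    transparency_note: str = ""      # discloses how the assistant finds information

def format_reply(reply: ChatbotReply) -> str:
    """Render the reply with its sources listed so the user can verify them."""
    lines = [reply.answer, "", "Sources you can check yourself:"]
    lines += [f"  - {s.name}: {s.url}" for s in reply.sources]
    if reply.transparency_note:
        lines += ["", f"About this assistant: {reply.transparency_note}"]
    return "\n".join(lines)

if __name__ == "__main__":
    reply = ChatbotReply(
        answer=("Two public health agencies have published guidance on this "
                "claim; here is what they say, so you can judge for yourself."),
        sources=[
            Source("CDC", "https://www.cdc.gov"),
            Source("WHO", "https://www.who.int"),
        ],
        transparency_note="Responses draw only on the public sources listed above.",
    )
    print(format_reply(reply))
```

Returning sources and a provenance note, rather than a true/false verdict, reflects the participants' emphasis on autonomy and their conditional trust in chatbot-based tools.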