Alex John London;Yosef S. Razin;Jason Borenstein;Motahhare Eslami;Russell Perkins;Paul Robinette
{"title":"面向老年人的近未来社交支持型智能助理的伦理问题","authors":"Alex John London;Yosef S. Razin;Jason Borenstein;Motahhare Eslami;Russell Perkins;Paul Robinette","doi":"10.1109/TTS.2023.3237124","DOIUrl":null,"url":null,"abstract":"This paper considers novel ethical issues pertaining to near-future artificial intelligence (AI) systems that seek to support, maintain, or enhance the capabilities of older adults as they age and experience cognitive decline. In particular, we focus on smart assistants (SAs) that would seek to provide proactive assistance and mediate social interactions between users and other members of their social or support networks. Such systems would potentially have significant utility for users and their caregivers if they could reduce the cognitive load for tasks that help older adults maintain their autonomy and independence. However, proactively supporting even simple tasks, such as providing the user with a summary of a meeting or a conversation, would require a future SA to engage with ethical aspects of human interactions which computational systems currently have difficulty identifying, tracking, and navigating. If SAs fail to perceive ethically relevant aspects of social interactions, the resulting deficit in moral discernment would threaten important aspects of user autonomy and well-being. After describing the dynamic that generates these ethical challenges, we note how simple strategies for prompting user oversight of such systems might also undermine their utility. 
We conclude by considering how near-future SAs could exacerbate current worries about privacy, commodification of users, trust calibration and injustice.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 4","pages":"291-301"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Ethical Issues in Near-Future Socially Supportive Smart Assistants for Older Adults\",\"authors\":\"Alex John London;Yosef S. Razin;Jason Borenstein;Motahhare Eslami;Russell Perkins;Paul Robinette\",\"doi\":\"10.1109/TTS.2023.3237124\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper considers novel ethical issues pertaining to near-future artificial intelligence (AI) systems that seek to support, maintain, or enhance the capabilities of older adults as they age and experience cognitive decline. In particular, we focus on smart assistants (SAs) that would seek to provide proactive assistance and mediate social interactions between users and other members of their social or support networks. Such systems would potentially have significant utility for users and their caregivers if they could reduce the cognitive load for tasks that help older adults maintain their autonomy and independence. However, proactively supporting even simple tasks, such as providing the user with a summary of a meeting or a conversation, would require a future SA to engage with ethical aspects of human interactions which computational systems currently have difficulty identifying, tracking, and navigating. If SAs fail to perceive ethically relevant aspects of social interactions, the resulting deficit in moral discernment would threaten important aspects of user autonomy and well-being. 
After describing the dynamic that generates these ethical challenges, we note how simple strategies for prompting user oversight of such systems might also undermine their utility. We conclude by considering how near-future SAs could exacerbate current worries about privacy, commodification of users, trust calibration and injustice.\",\"PeriodicalId\":73324,\"journal\":{\"name\":\"IEEE transactions on technology and society\",\"volume\":\"4 4\",\"pages\":\"291-301\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on technology and society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10017383/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on technology and society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10017383/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Ethical Issues in Near-Future Socially Supportive Smart Assistants for Older Adults
This paper considers novel ethical issues pertaining to near-future artificial intelligence (AI) systems that seek to support, maintain, or enhance the capabilities of older adults as they age and experience cognitive decline. In particular, we focus on smart assistants (SAs) that would seek to provide proactive assistance and mediate social interactions between users and other members of their social or support networks. Such systems would potentially have significant utility for users and their caregivers if they could reduce the cognitive load of tasks that help older adults maintain their autonomy and independence. However, proactively supporting even simple tasks, such as providing the user with a summary of a meeting or a conversation, would require a future SA to engage with ethical aspects of human interactions that computational systems currently have difficulty identifying, tracking, and navigating. If SAs fail to perceive ethically relevant aspects of social interactions, the resulting deficit in moral discernment would threaten important aspects of user autonomy and well-being. After describing the dynamic that generates these ethical challenges, we note how simple strategies for prompting user oversight of such systems might also undermine their utility. We conclude by considering how near-future SAs could exacerbate current worries about privacy, commodification of users, trust calibration, and injustice.