Household Social Robots − Special Issues Relating to Data Protection

Réka Pusztahelyi, Ibolya Stefán
DOI: 10.47745/ausleg.2022.11.1.06
Journal: Acta Universitatis Sapientiae, Legal Studies, Vol. 4 (1)
Publication date: 2022-06-15 (Journal Article)
Citations: 3

Abstract

Household social robots may have massive effects on our everyday lives and raise several concerns about data protection and privacy. The main characteristic of these devices is their capability of building close connections, even emotional bonds, between humans and robots. Socially interactive robots exhibit human social characteristics: they express and/or perceive emotions, communicate through high-level dialogue, and so on. Affective computing permits the development of AI systems capable of imitating human traits (emotions, speech, body language). The goal is to gain the trust of humans, to improve safety, and to strengthen the emotional bond between human and robot with the help of anthropomorphization. However, this emotional engagement may incentivize people to trade away personal information, jeopardizing their privacy. Social robots can infer the feelings and the physical and mental states of human beings from their emotional expressions and gestures. As a result, concerns arise regarding data protection, such as the classification of emotions, the issue of consent, and the emergence of the right to explanation. The article proceeds in three main stages. The first chapter deals with general questions relating to emotional AI and social robots, focusing on their deceptive and manipulative nature, which leads humans to disclose more and more information and lulls their privacy and data protection awareness. The second chapter demonstrates several data protection problems, such as the categorization and datafication of emotions (as biometric data), the issue of consent, and the emergence of the right to explanation. The third chapter highlights certain civil liability concerns regarding the infringement of the right to privacy in the light of the future EU civil liability regime for artificial intelligence.