Never Trust Anything That Can Think for Itself, if You Can’t Control Its Privacy Settings: The Influence of a Robot’s Privacy Settings on Users’ Attitudes and Willingness to Self-disclose

Impact Factor: 3.8 · Tier 2 (Computer Science) · JCR Q2 (Robotics)
Julia G. Stapels, Angelika Penner, Niels Diekmann, Friederike Eyssel
{"title":"永远不要相信任何可以独立思考的东西,如果你不能控制它的隐私设置:机器人的隐私设置对用户态度和自我披露意愿的影响","authors":"Julia G. Stapels, Angelika Penner, Niels Diekmann, Friederike Eyssel","doi":"10.1007/s12369-023-01043-8","DOIUrl":null,"url":null,"abstract":"Abstract When encountering social robots, potential users are often facing a dilemma between privacy and utility. That is, high utility often comes at the cost of lenient privacy settings, allowing the robot to store personal data and to connect to the internet permanently, which brings in associated data security risks. However, to date, it still remains unclear how this dilemma affects attitudes and behavioral intentions towards the respective robot. To shed light on the influence of a social robot’s privacy settings on robot-related attitudes and behavioral intentions, we conducted two online experiments with a total sample of N = 320 German university students. We hypothesized that strict privacy settings compared to lenient privacy settings of a social robot would result in more favorable attitudes and behavioral intentions towards the robot in Experiment 1. For Experiment 2, we expected more favorable attitudes and behavioral intentions for choosing independently the robot’s privacy settings in comparison to evaluating preset privacy settings. However, those two manipulations seemed to influence attitudes towards the robot in diverging domains: While strict privacy settings increased trust, decreased subjective ambivalence and increased the willingness to self-disclose compared to lenient privacy settings, the choice of privacy settings seemed to primarily impact robot likeability, contact intentions and the depth of potential self-disclosure. Strict compared to lenient privacy settings might reduce the risk associated with robot contact and thereby also reduce risk-related attitudes and increase trust-dependent behavioral intentions. However, if allowed to choose, people make the robot ‘their own’, through making a privacy-utility tradeoff. This tradeoff is likely a compromise between full privacy and full utility and thus does not reduce risks of robot-contact as much as strict privacy settings do. Future experiments should replicate these results using real-life human robot interaction and different scenarios to further investigate the psychological mechanisms causing such divergences.","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"54 1","pages":"0"},"PeriodicalIF":3.8000,"publicationDate":"2023-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Never Trust Anything That Can Think for Itself, if You Can’t Control Its Privacy Settings: The Influence of a Robot’s Privacy Settings on Users’ Attitudes and Willingness to Self-disclose\",\"authors\":\"Julia G. Stapels, Angelika Penner, Niels Diekmann, Friederike Eyssel\",\"doi\":\"10.1007/s12369-023-01043-8\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract When encountering social robots, potential users are often facing a dilemma between privacy and utility. That is, high utility often comes at the cost of lenient privacy settings, allowing the robot to store personal data and to connect to the internet permanently, which brings in associated data security risks. However, to date, it still remains unclear how this dilemma affects attitudes and behavioral intentions towards the respective robot. 
To shed light on the influence of a social robot’s privacy settings on robot-related attitudes and behavioral intentions, we conducted two online experiments with a total sample of N = 320 German university students. We hypothesized that strict privacy settings compared to lenient privacy settings of a social robot would result in more favorable attitudes and behavioral intentions towards the robot in Experiment 1. For Experiment 2, we expected more favorable attitudes and behavioral intentions for choosing independently the robot’s privacy settings in comparison to evaluating preset privacy settings. However, those two manipulations seemed to influence attitudes towards the robot in diverging domains: While strict privacy settings increased trust, decreased subjective ambivalence and increased the willingness to self-disclose compared to lenient privacy settings, the choice of privacy settings seemed to primarily impact robot likeability, contact intentions and the depth of potential self-disclosure. Strict compared to lenient privacy settings might reduce the risk associated with robot contact and thereby also reduce risk-related attitudes and increase trust-dependent behavioral intentions. However, if allowed to choose, people make the robot ‘their own’, through making a privacy-utility tradeoff. This tradeoff is likely a compromise between full privacy and full utility and thus does not reduce risks of robot-contact as much as strict privacy settings do. Future experiments should replicate these results using real-life human robot interaction and different scenarios to further investigate the psychological mechanisms causing such divergences.\",\"PeriodicalId\":14361,\"journal\":{\"name\":\"International Journal of Social Robotics\",\"volume\":\"54 1\",\"pages\":\"0\"},\"PeriodicalIF\":3.8000,\"publicationDate\":\"2023-09-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Social Robotics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s12369-023-01043-8\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ROBOTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Social Robotics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s12369-023-01043-8","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ROBOTICS","Score":null,"Total":0}
Citations: 1

Abstract

When encountering social robots, potential users often face a dilemma between privacy and utility. That is, high utility often comes at the cost of lenient privacy settings that allow the robot to store personal data and to connect to the internet permanently, which introduces associated data security risks. However, to date, it remains unclear how this dilemma affects attitudes and behavioral intentions towards the respective robot. To shed light on the influence of a social robot’s privacy settings on robot-related attitudes and behavioral intentions, we conducted two online experiments with a total sample of N = 320 German university students. In Experiment 1, we hypothesized that strict privacy settings, compared to lenient privacy settings of a social robot, would result in more favorable attitudes and behavioral intentions towards the robot. For Experiment 2, we expected more favorable attitudes and behavioral intentions when participants chose the robot’s privacy settings themselves than when they evaluated preset privacy settings. However, these two manipulations seemed to influence attitudes towards the robot in diverging domains: while strict privacy settings increased trust, decreased subjective ambivalence, and increased the willingness to self-disclose compared to lenient privacy settings, the choice of privacy settings seemed to primarily affect robot likeability, contact intentions, and the depth of potential self-disclosure. Strict compared to lenient privacy settings might reduce the risk associated with robot contact and thereby also reduce risk-related attitudes and increase trust-dependent behavioral intentions. However, if allowed to choose, people make the robot ‘their own’ by making a privacy-utility tradeoff. This tradeoff is likely a compromise between full privacy and full utility and thus does not reduce the risks of robot contact as much as strict privacy settings do. Future experiments should replicate these results using real-life human-robot interaction and different scenarios to further investigate the psychological mechanisms causing such divergences.
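
The abstract describes between-subjects comparisons (e.g., strict vs. lenient privacy settings in Experiment 1) on rating-scale outcomes such as trust. As a purely illustrative sketch, and not the authors’ analysis pipeline or data, the snippet below shows how such a two-condition comparison could in principle be tested; all variable names, group sizes, and values are hypothetical placeholders.

```python
# Illustrative sketch only -- not the paper's analysis code or data.
# Shows how a between-subjects comparison like Experiment 1
# (strict vs. lenient privacy settings) could be tested in principle.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical 7-point trust ratings for two independent groups
# (group sizes, means, and spreads are made-up placeholders).
trust_strict = rng.normal(loc=5.2, scale=1.0, size=80)
trust_lenient = rng.normal(loc=4.6, scale=1.1, size=80)

# Welch's t-test: compares group means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(trust_strict, trust_lenient, equal_var=False)

# Cohen's d as a simple standardized effect size.
pooled_sd = np.sqrt((trust_strict.var(ddof=1) + trust_lenient.var(ddof=1)) / 2)
cohens_d = (trust_strict.mean() - trust_lenient.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```

The same pattern would apply to the other reported outcomes (subjective ambivalence, willingness to self-disclose, likeability, contact intentions), with one comparison per dependent measure.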
Source journal: International Journal of Social Robotics
CiteScore: 9.80
Self-citation rate: 8.50%
Publication volume: 95
Journal description: Social Robotics is the study of robots that are able to interact and communicate among themselves, with humans, and with the environment, within the social and cultural structure attached to its role. The journal covers a broad spectrum of topics related to the latest technologies, new research results and developments in the area of social robotics on all levels, from developments in core enabling technologies to system integration, aesthetic design, applications and social implications. It provides a platform for like-minded researchers to present their findings and latest developments in social robotics, covering relevant advances in engineering, computing, arts and social sciences. The journal publishes original, peer reviewed articles and contributions on innovative ideas and concepts, new discoveries and improvements, as well as novel applications, by leading researchers and developers regarding the latest fundamental advances in the core technologies that form the backbone of social robotics, distinguished developmental projects in the area, as well as seminal works in aesthetic design, ethics and philosophy, studies on social impact and influence, pertaining to social robotics.