Investigating Google dashboard's explainability to support individual privacy decision making

Maria Clara G. de Almeida, L. C. de Castro Salgado
{"title":"调查谷歌仪表盘的可解释性,以支持个人隐私决策","authors":"Maria Clara G. de Almeida, L. C. de Castro Salgado","doi":"10.1145/3357155.3358438","DOIUrl":null,"url":null,"abstract":"Advances in information technology often overwhelm users with complex privacy and security decisions. They make the collection and use of personal data quite invisible. In the current scenario, this data collection can introduce risks, manipulate and influence the decision making process. This research is based on concepts from an emerging field of study called Human Data Interaction (HDI), which proposes to include the human at the center of the data stream, providing mechanisms for citizens to interact explicitly with the collected data. We explored the explanation as a promising mechanism for transparency in automated systems. In the first step, we apply the Semiotic Inspection Method (SIM) longitudinally to investigate how using explanations as an interactive feature can help or prevent users from making privacy decisions on Google services. In the second step, we conducted an empirical study in which users are able to analyze whether these explanations are satisfactory and feel (un) secure in the decision making process. And by comparing the results of the two steps, we find that even in a large company like Google, the right to explanation is not guaranteed. Google does not make its data processing transparent to users, nor does it provide satisfactory explanations of how its services use individual data. Consequently, the lack of coherent, detailed and transparent explanations hamper users to make good and safe decisions.","PeriodicalId":237718,"journal":{"name":"Proceedings of the 18th Brazilian Symposium on Human Factors in Computing Systems","volume":"83 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Investigating Google dashboard's explainability to support individual privacy decision making\",\"authors\":\"Maria Clara G. de Almeida, L. C. de Castro Salgado\",\"doi\":\"10.1145/3357155.3358438\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Advances in information technology often overwhelm users with complex privacy and security decisions. They make the collection and use of personal data quite invisible. In the current scenario, this data collection can introduce risks, manipulate and influence the decision making process. This research is based on concepts from an emerging field of study called Human Data Interaction (HDI), which proposes to include the human at the center of the data stream, providing mechanisms for citizens to interact explicitly with the collected data. We explored the explanation as a promising mechanism for transparency in automated systems. In the first step, we apply the Semiotic Inspection Method (SIM) longitudinally to investigate how using explanations as an interactive feature can help or prevent users from making privacy decisions on Google services. In the second step, we conducted an empirical study in which users are able to analyze whether these explanations are satisfactory and feel (un) secure in the decision making process. And by comparing the results of the two steps, we find that even in a large company like Google, the right to explanation is not guaranteed. Google does not make its data processing transparent to users, nor does it provide satisfactory explanations of how its services use individual data. 
Consequently, the lack of coherent, detailed and transparent explanations hamper users to make good and safe decisions.\",\"PeriodicalId\":237718,\"journal\":{\"name\":\"Proceedings of the 18th Brazilian Symposium on Human Factors in Computing Systems\",\"volume\":\"83 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-10-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 18th Brazilian Symposium on Human Factors in Computing Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3357155.3358438\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 18th Brazilian Symposium on Human Factors in Computing Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3357155.3358438","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Advances in information technology often overwhelm users with complex privacy and security decisions, and they make the collection and use of personal data largely invisible. In the current scenario, this data collection can introduce risks and can manipulate and influence the decision-making process. This research is based on concepts from an emerging field of study called Human Data Interaction (HDI), which proposes to place the human at the center of the data stream, providing mechanisms for citizens to interact explicitly with the collected data. We explored explanation as a promising mechanism for transparency in automated systems. In the first step, we applied the Semiotic Inspection Method (SIM) longitudinally to investigate how explanations, used as an interactive feature, can help or hinder users in making privacy decisions on Google services. In the second step, we conducted an empirical study in which users analyzed whether these explanations are satisfactory and whether they feel (un)secure during the decision-making process. By comparing the results of the two steps, we find that even in a large company like Google, the right to explanation is not guaranteed. Google does not make its data processing transparent to users, nor does it provide satisfactory explanations of how its services use individual data. Consequently, the lack of coherent, detailed, and transparent explanations hampers users' ability to make good and safe decisions.
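The abstract frames explanations as an interactive feature for transparency in a privacy dashboard. As a purely illustrative aid (not code from the paper; the names DataUseRecord and explain are hypothetical), the sketch below shows one way a dashboard could attach a plain-language explanation to each recorded data use, which a user could read before deciding whether to keep, restrict, or delete that data:

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch only -- not from the paper. All names are hypothetical.
# Idea: pair each collected data item with a plain-language explanation that
# the user can inspect before making a privacy decision.

@dataclass
class DataUseRecord:
    item: str               # what was collected, e.g. "search history"
    purpose: str            # why it is used, e.g. "ad personalization"
    shared_with: List[str]  # downstream services that receive it

    def explain(self) -> str:
        """Return a user-facing explanation for this record."""
        shared = ", ".join(self.shared_with) if self.shared_with else "no other services"
        return (f"Your {self.item} is used for {self.purpose} "
                f"and is shared with {shared}.")

if __name__ == "__main__":
    record = DataUseRecord(
        item="search history",
        purpose="ad personalization",
        shared_with=["YouTube recommendations"],
    )
    print(record.explain())
```

Whether such an explanation is coherent, detailed, and transparent enough to support a safe decision is exactly the question the paper's two-step evaluation addresses.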