Keynote: Privacy and Trust: Friend or Foe

Ling Liu
{"title":"Keynote: Privacy and Trust: Friend or Foe","authors":"Ling Liu","doi":"10.1145/3139531.3139537","DOIUrl":null,"url":null,"abstract":"Internet of Things (IoT) and Big Data have fueled the development of fully distributed computational architectures for future cyber systems from data analytics, machine learning (ML) to artificial intelligence (AI). Trust and Privacy become two vital and necessary measures for distributed management of IoT powered big data learning systems and services. However, these two measures have been studied independently in computer science, social science and law. Trust is widely considered as a critical measure for the correctness, predictability, and resiliency (with respect to reliability and security) of software systems, be it big data systems, IoT systems, machine learning systems, or Artificial Intelligence systems. Privacy on the other hand is commonly recognized as a personalization measure for imposing control on the ways of how data is captured, accessed and analyzed, and the ways of how data analytic results from ML models and AI systems should be released and shared. Broadly speaking, in human society, we rely on three types of trust in our everyday work and life to achieve a peaceful mind: (1) verifiable belief-driven trust, (2) statistical evidence based trust, and (3) complex systemwide cognitive trust. Interestingly, privacy has been a more controversial subject. On one hand, privacy is an important built-in dimension of trust, which is deep rooted in human society, and a highly valued virtue in Western civilization. Even though different human beings may have diverse levels of privacy sensitivity, we all trust that our privacy is respected in our social and professional environments, including at home, at work and in social commons. Thus, Privacy is a perfect example of three-fold trust: belief-driven, statistical evident, and complex cognitive trust. On the other hand, many view privacy (and privacy protection) as an antagonistic measure of trust and one is often asked to show trust at the cost of giving up on privacy. Are Privacy and Trust friend or foe? This keynote will share my view to this question from multiple perspectives. I conjecture that the answer to this question can fundamentally change the ways we conduct research in privacy and trust in the next generation of big data enhanced cyber learning systems from data mining, machine learning to artificial intelligence.","PeriodicalId":295031,"journal":{"name":"Proceedings of the 2017 Workshop on Women in Cyber Security","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2017 Workshop on Women in Cyber Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3139531.3139537","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The Internet of Things (IoT) and Big Data have fueled the development of fully distributed computational architectures for future cyber systems, from data analytics and machine learning (ML) to artificial intelligence (AI). Trust and privacy have become two vital and necessary measures for the distributed management of IoT-powered big data learning systems and services. However, these two measures have been studied independently in computer science, social science, and law. Trust is widely considered a critical measure of the correctness, predictability, and resiliency (with respect to reliability and security) of software systems, be they big data systems, IoT systems, machine learning systems, or artificial intelligence systems. Privacy, on the other hand, is commonly recognized as a personalization measure for controlling how data is captured, accessed, and analyzed, and how analytic results from ML models and AI systems should be released and shared. Broadly speaking, in human society we rely on three types of trust in our everyday work and life to achieve peace of mind: (1) verifiable, belief-driven trust; (2) statistical, evidence-based trust; and (3) complex, system-wide cognitive trust. Interestingly, privacy has been a more controversial subject. On one hand, privacy is an important built-in dimension of trust, deeply rooted in human society and a highly valued virtue in Western civilization. Even though different human beings may have diverse levels of privacy sensitivity, we all trust that our privacy is respected in our social and professional environments, including at home, at work, and in social commons. Thus, privacy is a perfect example of this three-fold trust: belief-driven, statistical-evidence-based, and complex cognitive trust. On the other hand, many view privacy (and privacy protection) as an antagonistic measure to trust, and one is often asked to show trust at the cost of giving up privacy. Are privacy and trust friend or foe? This keynote will share my view on this question from multiple perspectives. I conjecture that the answer to this question can fundamentally change the ways we conduct research on privacy and trust in the next generation of big-data-enhanced cyber learning systems, from data mining and machine learning to artificial intelligence.