STRIDE: A Secure Framework for Modeling Trust-Privacy Tradeoffs in Distributed Computing Environments
R. Deghaili, A. Chehab, A. Kayssi, W. Itani
{"title":"STRIDE: A Secure Framework for Modeling Trust-Privacy Tradeoffs in Distributed Computing Environments","authors":"R. Deghaili, A. Chehab, A. Kayssi, W. Itani","doi":"10.4018/jdtis.2010010104","DOIUrl":null,"url":null,"abstract":"This paper presents STRIDE: a Secure framework for modeling Trust-pRIvacy tradDEoffs in distributed computing environments. STRIDE aims at achieving the right privacy-trust tradeoff among distributed systems entities. This is done by establishing a set of secure mechanisms for quantifying the privacy loss and the corresponding trust gain required by a given network transaction. The privacy-trust quantification process allows the service requestor and provider to create the required trust levels necessary for executing the transaction while minimizing the privacy loss incurred. Moreover, STRIDE supports communication anonymity by associating each communicating entity with an administrative group. In this way, the identification information of the communicating entities is anonymously masked by the identification of their respective groups. The confidentiality, authenticity and integrity of data communication are ensured using appropriate cryptographic mechanisms. Moreover, data sent between groups is saved from dissemination by a self-destruction process. STRIDE provides a context-aware model supporting agents with various privacy-trust characteristics and behaviors. The system is implemented on the Java-based Aglets platform. DOI: 10.4018/jdtis.2010010104 International Journal of Dependable and Trustworthy Information Systems, 1(1), 60-81, January-March 2010 61 Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. dentials from the other entity before executing any transaction. Knowledge can be based on observations, recommendations or reputation. However, knowledge is not only related to the concept of “trusting an entity”. Another concept, which is tightly related to knowledge and trust, is privacy. Trust and privacy are two conflicting concepts. This is due to the fact that the more knowledge an entity acquires about a second entity, the more accurate the trustworthiness would be. But, more knowledge about an entity implies less privacy left to that entity. Since both trust and privacy are essential elements in a well-functioning environment, this conflict should be properly addressed. In this paper we present STRIDE, a secure framework for modeling trust-privacy tradeoffs in distributed computing environments. STRIDE employs a set of quantification mechanisms to model privacy loss and trust gain in order to determine the right tradeoff between them. A general framework is developed to select the set of information that minimizes the privacy loss for a required trust gain. STRIDE supports communication anonymity, confidentiality, authentication, and integrity and prevents private data dissemination by employing a self-destruction process. Moreover, STRIDE provides a context-aware model supporting agents with various privacy-trust characteristics and behaviors. The system is implemented on the Java-based Aglets platform. Simulation results prove that entities requesting a service tend to incur higher privacy losses when their past experiences are not reputable, when they exhibit an open behavior in revealing their private data, or when the service provider is paranoiac in nature. In all cases, the privacy loss is controlled and quantified. 
The rest of this paper is organized as follows: Section II presents a literature survey of the main protocols related to the proposed work. Section III describes the trust-privacy tradeoff model design and architecture. Section IV discusses the simulation results obtained when testing the trust-privacy tradeoff system on a simulated network using the Aglets platform. Conclusions are presented in Section V.","PeriodicalId":298071,"journal":{"name":"Int. J. Dependable Trust. Inf. Syst.","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Int. J. Dependable Trust. Inf. Syst.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4018/jdtis.2010010104","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
This paper presents STRIDE, a Secure framework for modeling Trust-pRIvacy traDEoffs in distributed computing environments. STRIDE aims at achieving the right privacy-trust tradeoff among distributed system entities. This is done by establishing a set of secure mechanisms for quantifying the privacy loss and the corresponding trust gain required by a given network transaction. The privacy-trust quantification process allows the service requestor and provider to build the trust levels necessary for executing the transaction while minimizing the privacy loss incurred. Moreover, STRIDE supports communication anonymity by associating each communicating entity with an administrative group; in this way, the identification information of the communicating entities is masked by the identification of their respective groups. The confidentiality, authenticity, and integrity of data communication are ensured using appropriate cryptographic mechanisms, and data sent between groups is protected from further dissemination by a self-destruction process. STRIDE also provides a context-aware model supporting agents with various privacy-trust characteristics and behaviors. The system is implemented on the Java-based Aglets platform.

[…] credentials from the other entity before executing any transaction. Knowledge can be based on observations, recommendations, or reputation. However, knowledge is not only related to the concept of "trusting an entity": another concept, tightly related to knowledge and trust, is privacy. Trust and privacy are conflicting concepts, because the more knowledge an entity acquires about a second entity, the more accurate its assessment of that entity's trustworthiness becomes, while more knowledge about an entity also implies less privacy left to that entity. Since both trust and privacy are essential elements of a well-functioning environment, this conflict should be properly addressed.

In this paper we present STRIDE, a secure framework for modeling trust-privacy tradeoffs in distributed computing environments. STRIDE employs a set of quantification mechanisms to model privacy loss and trust gain in order to determine the right tradeoff between them. A general framework is developed to select the set of information that minimizes the privacy loss for a required trust gain; a rough sketch of this selection is given below. STRIDE supports communication anonymity, confidentiality, authentication, and integrity, and prevents private data dissemination by employing a self-destruction process (a sketch of the group masking and self-destruction ideas follows the paper roadmap). Moreover, STRIDE provides a context-aware model supporting agents with various privacy-trust characteristics and behaviors. The system is implemented on the Java-based Aglets platform. Simulation results show that entities requesting a service tend to incur higher privacy losses when their past experiences are not reputable, when they exhibit open behavior in revealing their private data, or when the service provider is paranoid in nature. In all cases, the privacy loss is controlled and quantified.
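As a rough illustration of the minimal-disclosure idea described above, the sketch below greedily discloses the credentials with the best trust-gain-to-privacy-loss ratio until the provider's required trust level is reached. The greedy strategy, the class names, and the numeric weights are assumptions made purely for illustration; the abstract does not specify how STRIDE actually performs this selection or quantifies privacy loss and trust gain.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class DisclosureSelector {

    // A piece of private information with an assumed privacy cost and trust gain (hypothetical values).
    record Credential(String name, double privacyLoss, double trustGain) {}

    // Greedily disclose credentials with the best trust-gain-per-privacy-loss ratio until the target is met.
    static List<Credential> select(List<Credential> available, double requiredTrustGain) {
        List<Credential> ranked = new ArrayList<>(available);
        ranked.sort(Comparator.comparingDouble((Credential c) -> -(c.trustGain() / c.privacyLoss())));

        List<Credential> disclosed = new ArrayList<>();
        double gained = 0.0;
        for (Credential c : ranked) {
            if (gained >= requiredTrustGain) break;   // enough trust already established
            disclosed.add(c);
            gained += c.trustGain();
        }
        return gained >= requiredTrustGain ? disclosed : List.of();  // empty list = target unreachable
    }

    public static void main(String[] args) {
        List<Credential> wallet = List.of(
            new Credential("pseudonym",        0.1, 0.2),
            new Credential("group membership", 0.2, 0.5),
            new Credential("payment history",  0.6, 0.9),
            new Credential("home address",     0.9, 0.4));

        // The provider demands a trust gain of 0.8 before executing the transaction.
        List<Credential> toReveal = select(wallet, 0.8);
        double loss = toReveal.stream().mapToDouble(Credential::privacyLoss).sum();
        System.out.println("Disclose " + toReveal + " at a total privacy loss of " + loss);
    }
}

Running main prints the chosen credentials and the total privacy loss incurred; a dynamic-programming or exhaustive search could replace the greedy pass if an exactly minimal loss were required.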
The rest of this paper is organized as follows: Section II presents a literature survey of the main protocols related to the proposed work. Section III describes the trust-privacy tradeoff model design and architecture. Section IV discusses the simulation results obtained when testing the trust-privacy tradeoff system on a simulated network using the Aglets platform. Conclusions are presented in Section V.
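The following sketch likewise illustrates, under assumed class and field names, two mechanisms mentioned in the abstract: masking a sender behind its administrative group identifier, and wiping a private payload once its lifetime expires so it cannot be disseminated further. It is not the paper's Aglets-based implementation, and in STRIDE the payload would additionally be protected by the cryptographic mechanisms mentioned above.

import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.time.Instant;
import java.util.Arrays;
import java.util.Optional;

public class GroupMessage {
    private final String senderGroupId;   // only the sender's group is visible to the receiver
    private final Instant expiresAt;      // after this instant the payload is destroyed
    private byte[] payload;               // encrypted in the real system; kept in the clear here

    public GroupMessage(String senderGroupId, byte[] payload, Duration lifetime) {
        this.senderGroupId = senderGroupId;
        this.payload = payload.clone();
        this.expiresAt = Instant.now().plus(lifetime);
    }

    // Returns the payload while it is still alive, wiping it once it has expired.
    public Optional<byte[]> read() {
        if (Instant.now().isAfter(expiresAt)) {
            selfDestruct();
            return Optional.empty();
        }
        return Optional.of(payload.clone());
    }

    // Overwrite the buffer so the private data cannot be read or forwarded later.
    private void selfDestruct() {
        if (payload != null) {
            Arrays.fill(payload, (byte) 0);
            payload = null;
        }
    }

    public String senderGroupId() { return senderGroupId; }

    public static void main(String[] args) throws InterruptedException {
        GroupMessage msg = new GroupMessage(
            "group-A",                                          // masks the real sender's identity
            "account=12345".getBytes(StandardCharsets.UTF_8),
            Duration.ofMillis(100));                            // short lifetime for the demo

        System.out.println("from " + msg.senderGroupId() + ", readable: " + msg.read().isPresent());
        Thread.sleep(200);                                      // wait past the expiry time
        System.out.println("after expiry, readable: " + msg.read().isPresent());
    }
}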