{"title":"Enforcing Location and Time-Based Access Control on Cloud-Stored Data","authors":"Elli Androulaki, Claudio Soriente, Luka Malisa, Srdjan Capkun","doi":"10.1109/ICDCS.2014.71","DOIUrl":"https://doi.org/10.1109/ICDCS.2014.71","url":null,"abstract":"Recent incidents of data-breaches from the cloud suggest that users should not trust the cloud provider to enforce access control on their data. We focus on mitigating trust to the cloud in scenarios where granting access to data not only considers user identities (as in conventional access policies), but also contextual information such as the user's location and time of access. Previous work in this context assumes a fully trusted cloud that is further capable of locating users. We introduce LoTAC, a novel framework that seamlessly integrates the operation of a cloud provider and a localization infrastructure to enforce location- and time-based access control to cloud-stored data. In LoTAC, the two entities operate independently and are only trusted to offer their basic services: the cloud provider is used and trusted only to reliably store data, the localization infrastructure is used and trusted only to accurately locate users. Furthermore, neither the cloud provider nor the localization infrastructure can access the data, even if they collude. LoTAC protocols require no changes to the cloud provider and minimal changes to the localization infrastructure. We evaluate our protocols using a cellular network as the localization infrastructure and show that they incur in low communication and computation costs and scale well with a large number of users and policies.","PeriodicalId":170186,"journal":{"name":"2014 IEEE 34th International Conference on Distributed Computing Systems","volume":"1993 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128629318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Compiler Driven Automatic Kernel Context Migration for Heterogeneous Computing","authors":"Ramy Gad, Tim Süß, A. Brinkmann","doi":"10.1109/ICDCS.2014.47","DOIUrl":"https://doi.org/10.1109/ICDCS.2014.47","url":null,"abstract":"Computer systems provide different heterogeneous resources (e.g., GPUs, DSPs and FPGAs) that accelerate applications and that can reduce the energy consumption by using them. Usually, these resources have an isolated memory and a require target specific code to be written. There exist tools that can automatically generate target specific codes for program parts, so-called kernels. The data objects required for a target kernel execution need to be moved to the target resource memory. It is the programmers' responsibility to serialize these data objects used in the kernel and to copy them to or from the resource's memory. Typically, the programmer writes his own serializing function or uses existing serialization libraries. Unfortunately, both approaches require code modifications, and the programmer needs knowledge of the used data structure format. There is a need for a tool that is able to automatically extract the original kernel data objects, serialize them, and migrate them to a target resource without requiring intervention from the programmer. In this paper, we present a tool collection ConSerner that automatically identifies, gathers, and serializes the context of a kernel and migrates it to a target resource's memory where a target specific kernel is executed with this data. This is all done transparently to the programmer. Complex data structures can be used without making a modification of the program code by a programmer necessary. Predefined data structures in external libraries (e.g., the STL's vector) can also be used as long as the source code of these libraries is available.","PeriodicalId":170186,"journal":{"name":"2014 IEEE 34th International Conference on Distributed Computing Systems","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131757808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"e-PPI: Locator Service in Information Networks with Personalized Privacy Preservation","authors":"Y. Tang, Ling Liu, A. Iyengar, Kisung Lee, Qi Zhang","doi":"10.1109/ICDCS.2014.27","DOIUrl":"https://doi.org/10.1109/ICDCS.2014.27","url":null,"abstract":"In emerging information networks, having a privacy preserving index (or PPI) is critically important for locating information of interest for data sharing across autonomous providers while preserving privacy. An understudied problem for PPI techniques is how to provide controllable privacy preservation, given the innate difference of privacy concerns regarding different data owners. In this paper we present a personalized privacy preserving index, coined ε-PPI, which guarantees quantitative privacy preservation differentiated by personal identities. We devise a new common-identity attack that breaks existing PPI's and propose an identity-mixing protocol against the attack in ε-PPI. The proposed ε-PPI construction protocol is the first without any trusted third party and/or trust relationships between providers. We have implemented our ε-PPI construction protocol by using generic MPC techniques (secure multi-party computation) and optimized the performance to a practical level by minimizing the expensive MPC part.","PeriodicalId":170186,"journal":{"name":"2014 IEEE 34th International Conference on Distributed Computing Systems","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127881710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Propeller: A Scalable Real-Time File-Search Service in Distributed Systems","authors":"Lei Xu, Hong Jiang, Lei Tian, Ziling Huang","doi":"10.1109/ICDCS.2014.46","DOIUrl":"https://doi.org/10.1109/ICDCS.2014.46","url":null,"abstract":"File-search service is a valuable facility to accelerate many analytics applications, because it can drastically reduce the scale of the input data. The main challenge facing the design of large-scale and accurate file-search services is how to support real-time indexing in an efficient and scalable way. To address this challenge, we propose a distributed file-search service, called Propeller, which utilizes a special file-access pattern, called access-causality, to partition file-indices in order to expose substantial access locality and parallelism to accelerate the file-indexing process. The extensive evaluations of Propeller show that it is real-time in file-indexing operations, accurate in file-search results, and scalable in large datasets. It achieves significantly better file-indexing and file-search performance (up to 250x) than a centralized solution (MySQL) and much higher accuracy and substantially lower query latency (up to 22x than a state-of-the-art desktop search engine (Spotlight).","PeriodicalId":170186,"journal":{"name":"2014 IEEE 34th International Conference on Distributed Computing Systems","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121160208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bandwidth Guarantee under Demand Uncertainty in Multi-tenant Clouds","authors":"Lei Yu, Haiying Shen","doi":"10.1109/ICDCS.2014.34","DOIUrl":"https://doi.org/10.1109/ICDCS.2014.34","url":null,"abstract":"The shared multi-tenant nature of cloud network infrastructures has caused poor application performance in the clouds due to unpredictable network performance. To provide bandwidth guarantee, several virtual network abstractions have been proposed which allow the tenants to specify and reserve virtual clusters with required network bandwidth between the VMs. However, all of these existing proposals require the tenants to deterministically characterize the exact bandwidth demands in the abstractions, which can be difficult and result in inefficient bandwidth reservation due to the demand uncertainty. In this paper, we propose a virtual cluster abstraction with stochastic bandwidth requirements between VMs, called Stochastic Virtual Cluster (SVC), which probabilistically models the bandwidth demand uncertainty. Based on SVC, we propose a network sharing framework and efficient VM allocation algorithms to ensure that the bandwidth demands of tenants on any link are satisfied with a high probability, while minimizing the bandwidth occupancy cost on links. Using simulations, we demonstrate the effectiveness of SVC for accommodating cloud application workloads with highly volatile bandwidth demands, in the way of achieving the trade-off between the job concurrency and average job running time.","PeriodicalId":170186,"journal":{"name":"2014 IEEE 34th International Conference on Distributed Computing Systems","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121483250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Truthful Mechanisms for Mobile Crowdsourcing with Dynamic Smartphones","authors":"Yanmin Zhu, Qian Zhang, Hongzi Zhu, Jiadi Yu, Jian Cao, L. Ni","doi":"10.1109/ICDCS.2014.10","DOIUrl":"https://doi.org/10.1109/ICDCS.2014.10","url":null,"abstract":"Stimulating participation from smartphone users is of paramount importance to mobile crowd sourcing systems and applications. A few incentive mechanisms have been proposed, but most of them have made the impractical assumption that smartphones remain static in the system and sensing tasks are known in advance. The existing mechanisms fail when being applied to the realistic scenario where smartphones dynamically arrive to the system and sensing tasks are submitted at random. It is particularly challenging to design an incentive mechanism for such a mobile crowd sourcing system, given dynamic smartphones, uncertain arrivals of tasks, strategic behaviors, and private information of smartphones. We propose two truthful auction mechanisms for two different cases of mobile crowd sourcing with dynamic smartphones. For the offline case, we design an optimal truthful mechanism with an optimal task allocation algorithm of polynomial-time computation complexity of O (n+γ)3, where n is the number of smartphones and γ is the number of sensing tasks. For the online case, we design a near-optimal truthful mechanism with an online task allocation algorithm that achieves a constant competitive ratio of 1:2. Rigorous theoretical analysis and extensive simulations have been performed, and the results demonstrate the proposed auction mechanisms achieve truthfulness, individual rationality, computational efficiency, and low overpayment.","PeriodicalId":170186,"journal":{"name":"2014 IEEE 34th International Conference on Distributed Computing Systems","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116813733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"T-Storm: Traffic-Aware Online Scheduling in Storm","authors":"Jielong Xu, Zhenhua Chen, Jian Tang, Sen Su","doi":"10.1109/ICDCS.2014.61","DOIUrl":"https://doi.org/10.1109/ICDCS.2014.61","url":null,"abstract":"Storm has emerged as a promising computation platform for stream data processing. In this paper, we first show inefficiencies of the current practice of Storm scheduling and challenges associated with applying traffic-aware online scheduling in Storm via experimental results and analysis. Motivated by our observations, we design and implement a new stream data processing system based on Storm, namely, T-Storm. Compared to Storm, T-Storm has the following desirable features: 1) based on runtime states, it accelerates data processing by leveraging effective traffic-aware scheduling for assigning/re-assigning tasks dynamically, which minimizes inter-node and inter-process traffic while ensuring no worker nodes are overloaded, 2) it enables fine-grained control over worker node consolidation such that T-Storm can achieve better performance with even fewer worker nodes, 3) it allows hot-swapping of scheduling algorithms and adjustment of scheduling parameters on the fly, and 4) it is transparent to Storm users (i.e., Storm applications can be ported to run on T-Storm without any changes). We conducted real experiments in a cluster using well-known data processing applications for performance evaluation. Extensive experimental results show that compared to Storm (with the default scheduler), T-Storm can achieve over 84% and 27% speedup on lightly and heavily loaded topologies respectively (in terms of average processing time) with 30% less number of worker nodes.","PeriodicalId":170186,"journal":{"name":"2014 IEEE 34th International Conference on Distributed Computing Systems","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130811165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Online, Accurate, and Scalable QoS Prediction for Runtime Service Adaptation","authors":"Jieming Zhu, Pinjia He, Zibin Zheng, Michael R. Lyu","doi":"10.1109/ICDCS.2014.40","DOIUrl":"https://doi.org/10.1109/ICDCS.2014.40","url":null,"abstract":"Service-based cloud applications are typically built on component services to fulfill certain application logic. To meet quality-of-service (QoS) guarantees, these applications have to become resilient against the QoS variations of their component services. Runtime service adaptation has been recognized as a key solution to achieve this goal. To make timely and accurate adaptation decisions, effective QoS prediction is desired to obtain the QoS values of component services. However, current research has focused mostly on QoS prediction of the working services that are being used by a cloud application, but little on QoS prediction of candidate services that are also important for making adaptation decisions. To bridge this gap, in this paper, we propose a novel QoS prediction approach, namely adaptive matrix factorization (AMF), which is inspired from the collaborative filtering model used in recommender systems. Specifically, our AMF approach extends conventional matrix factorization into an online, accurate, and scalable model by employing techniques of data transformation, online learning, and adaptive weights. Comprehensive experiments have been conducted based on a real-world large-scale QoS dataset of Web services to evaluate our approach. The evaluation results provide good demonstration for our approach in achieving accuracy, efficiency, and scalability.","PeriodicalId":170186,"journal":{"name":"2014 IEEE 34th International Conference on Distributed Computing Systems","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128816835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lock-Free Cuckoo Hashing","authors":"Nhan Nguyen, P. Tsigas","doi":"10.1109/ICDCS.2014.70","DOIUrl":"https://doi.org/10.1109/ICDCS.2014.70","url":null,"abstract":"This paper presents a lock-free cuckoo hashing algorithm, to the best of our knowledge this is the first lock-free cuckoo hashing in the literature. The algorithm allows mutating operations to operate concurrently with query ones and requires only single word compare-and-swap primitives. Query of items can operate concurrently with others mutating operations, thanks to the two-round query protocol enhanced with a logical clock technique. When an insertion triggers a sequence of key displacements, instead of locking the whole cuckoo path, our algorithm breaks down the chain of relocations into several single relocations which can be executed independently and concurrently with other operations. A fine tuned synchronization and a helping mechanism for relocation are designed. The mechanisms allow high concurrency and provide progress guarantees for the data structure's operations. Our experimental results show that our lock-free cuckoo hashing performs consistently better than two efficient lock-based hashing algorithms, the chained and the hopscotch hash-map, in different access pattern scenarios.","PeriodicalId":170186,"journal":{"name":"2014 IEEE 34th International Conference on Distributed Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129771935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BOND: Exploring Hidden Bottleneck Nodes in Large-Scale Wireless Sensor Networks","authors":"Q. Ma, Kebin Liu, Tong Zhu, Wei Gong, Yunhao Liu","doi":"10.1109/ICDCS.2014.48","DOIUrl":"https://doi.org/10.1109/ICDCS.2014.48","url":null,"abstract":"In a large-scale wireless sensor network, thousands of sensor nodes periodically generate and forward data back to the sink. In our recent outdoor deployment, we observe that some bottleneck nodes can greatly determine other nodes' data collection ratio, and thus affect the whole network performance. To figure out the importance of a node in data collection, the manager needs to understand the interactive behaviors among the parent and child nodes. To address this issue, we present a management tool BOND (Bottleneck Node Detector). We introduce the concept of Node Dependence to characterize how much a node relies on each of its parent nodes. BOND models the routing process as a Hidden Markov Model, and uses a machine learning approach to learn the state transition probabilities in this model based on the observed traces. BOND utilizes Node Dependence to explore the hidden bottleneck nodes in the network. Moreover, we can predict how adding or removing the sensor nodes would impact the data flow, thus avoid data loss and flow congestion in redeployment. We implement our tool on real hardware and deploy it in an outdoor system. Our extensive experiments show that BOND infers the Node Dependence with an average accuracy of more than 85%.","PeriodicalId":170186,"journal":{"name":"2014 IEEE 34th International Conference on Distributed Computing Systems","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128602181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}