{"title":"Towards Secure and Effective Utilization over Encrypted Cloud Data","authors":"Cong Wang, Qian Wang, K. Ren","doi":"10.1109/ICDCSW.2011.16","DOIUrl":"https://doi.org/10.1109/ICDCSW.2011.16","url":null,"abstract":"Cloud computing enables an economic paradigm of data service outsourcing, where individuals and enterprise customers can avoid committing large capital outlays in the purchase and management of both software and hardware and the operational overhead therein. Despite the tremendous benefits, outsourcing data management to the commercial public cloud is also depriving customers' direct control over the systems that manage their data, raising security and privacy as the primary obstacles to the adoption of cloud. Although data encryption helps protecting data confidentiality, it also obsoletes the traditional data utilization service based on plain text keyword search. Thus, enabling an encrypted cloud data search service with privacy-assurance is of paramount importance. Considering the large number of data users and huge amount of outsourced data files in cloud, this problem is particularly challenging as it is extremely difficult to meet also the practical requirements of performance, system usability, and high-level user searching experiences. This paper investigates these challenges and defines the problem of fuzzy keyword search over encrypted cloud data, which should be explored for effective data utilization in Cloud Computing. Fuzzy keyword search aims at accommodating various typos and representation inconsistencies in different user searching input for acceptable system usability and overall user searching experience, while protecting keyword privacy. In order to further enrich the spectrum of secure cloud data utilization services, we also study how the notion of fuzzy search naturally supports similarity search, a fundamental and powerful tool that is widely used in information retrieval. We describe the challenges that are not yet met by existing searchable encryption techniques and discuss the research directions and possible technical approaches for these new search functionalities to become a reality. The investigation of the proposed research can become the key for cloud service providers to securely and effectively deliver value from the cloud infrastructure to their enterprise and individual customers, and thus significantly encourage the adoption of Cloud Computing in a large scale.","PeriodicalId":133514,"journal":{"name":"2011 31st International Conference on Distributed Computing Systems Workshops","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126230306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Broadcasting Method Based on Topology Control for Fault-Tolerant MANET","authors":"D. Kasamatsu, Yuta Kawamura, M. Oki, N. Shinomiya","doi":"10.1109/ICDCSW.2011.40","DOIUrl":"https://doi.org/10.1109/ICDCSW.2011.40","url":null,"abstract":"Mobile ad hoc networks (MANETs) have the disadvantage of decreased network reliability due to the mobility of intermediate terminals, unstable wireless links, and battery exhaustion. The reliability of networks is characterized by their level of k-connectivity. The problem optimizing a network lifetime by minimizing power consumption at a given k-connectivity is called the transmission-power assignment problem (TPAP). There are several conventional approaches to solve this problem. However, a performance evaluation of mobile networks has not yet been conducted. This paper proposes a broadcasting method based on topology control for MANETs with the aim of achieving dual objectives of reducing power consumption and ensuring an acceptable reliability level. Simulation results show that the proposed method can ensure high network reliability and reduce power consumption.","PeriodicalId":133514,"journal":{"name":"2011 31st International Conference on Distributed Computing Systems Workshops","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121584744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Private Editing Using Untrusted Cloud Services","authors":"Yan Huang, David Evans","doi":"10.1109/ICDCSW.2011.36","DOIUrl":"https://doi.org/10.1109/ICDCSW.2011.36","url":null,"abstract":"We present a general methodology for protecting the confidentiality and integrity of user data for a class of on-line editing applications. The key insight is that many of these applications are designed to perform most of their data-dependent computation on the client side, so it is possible to maintain their functionality while only exposing an encrypted version of the document to the server. We apply our methodology to Google Documents and describe a prototype extension tool that enables users to use a cloud application to manage their documents without sacrificing confidentiality or integrity. To provide adequate performance, we employ an incremental encryption scheme and extend it to support variable-length blocks. We analyze the security of our scheme and report on experiments that show our extension preserves most of the cloud application's functionality with less than 10% overhead for typical use.","PeriodicalId":133514,"journal":{"name":"2011 31st International Conference on Distributed Computing Systems Workshops","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126280296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Model of Storage I/O Performance Interference in Virtualized Systems","authors":"G. Casale, Stephan Kraft, Diwakar Krishnamurthy","doi":"10.1109/ICDCSW.2011.46","DOIUrl":"https://doi.org/10.1109/ICDCSW.2011.46","url":null,"abstract":"In this paper, we propose simple performance models to predict the impact of consolidation on the storage I/O performance of virtualized applications. We use a measurement-based approach based on tools such as blktrace and tshark for storage workload characterization in a commercial virtualized solution, namely VMware ESX server. Our approach allows a distinct characterization of read/write performance attributes on a per request level and provides valuable information for parameterization of storage I/O performance models. In particular, based on measures of quantities such as the mean queue-length seen upon arrival by requests, we define simple linear prediction models for the throughput, response times, and mix of read/write requests in consolidation based only on information collected in isolation experiments for the individual virtual machines.","PeriodicalId":133514,"journal":{"name":"2011 31st International Conference on Distributed Computing Systems Workshops","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130595495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Probabilistic Approach to Address TCP Incast in Data Center Networks","authors":"S. Kulkarni, P. Agrawal","doi":"10.1109/ICDCSW.2011.41","DOIUrl":"https://doi.org/10.1109/ICDCSW.2011.41","url":null,"abstract":"Data centers typically host tens of thousands of servers that communicate with each other using high speed network interconnects. While these servers help in servicing millions of clients, their overall performance largely depends on the efficiency of the center's communication fabric. Cost and compatibility reasons however, persuade many data centers to consider Ethernet for their baseline communication fabric. Until recently, Ethernet speeds inside data centers averaged around 100Mbps but the evolution of IEEE 802.3 standards has led to the development of 1 Gbps and 10 Gbps Ethernet. This sudden jump in Ethernet speeds requires proportional scaling of TCP/IP processing for network intensive applications to really benefit from the increased bandwidth. While IP is expected to scale well in this context, TCP is known to have problems supporting very high data rates at very low latencies. One such problem, termed the `Incast', results in gross under-utilization of link capacity in certain many-to-one TCP communication patterns. This paper presents a practical solution to TCP's incast problem. Our proposed technique relies on a probabilistic approach that augments TCP's standard congestion recovery mechanism. Simulation results demonstrate that this technique is effective in avoiding TCP throughput collapse in data center networks.","PeriodicalId":133514,"journal":{"name":"2011 31st International Conference on Distributed Computing Systems Workshops","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130845593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Connected Point Coverage in Wireless Sensor Networks Using Robust Spanning Trees","authors":"P. Ostovari, M. Dehghan, Jie Wu","doi":"10.1109/ICDCSW.2011.47","DOIUrl":"https://doi.org/10.1109/ICDCSW.2011.47","url":null,"abstract":"Energy limitation is one of the most critical challenges in the area of sensor networks. Sleep scheduling mechanisms can reduce the energy consumption. Coverage mechanisms attempt to cover the area with the minimum possible number of sensors. There are many area coverage approaches which also consider the connectivity problem. However, in the area of point coverage, there are limited mechanisms that maintain connectivity. In this paper, we propose a point coverage mechanism and two connectivity mechanisms. We compare these mechanisms to one of the best methods that consider both point coverage and connectivity. In the point coverage mechanism, we present a method for computing the waiting time, which reduces the number of the required sensors. For preserving the connectivity, virtual robust spanning tree (VRST) and modified virtual robust spanning tree (MVRST) are proposed. These mechanisms are based on making a virtual spanning tree and converting this tree to a physical tree. In order to spread out sensed data to the sink from different paths and decrease the loss probability, instead of using a minimum spanning tree (MST) to connect nodes to the sink, we use a combination of distance of nodes and number of hops to select edges and construct the tree. The simulation results show that the proposed coverage method reduces energy consumption by up to 7% compared to the Cardei method. The VRST and MVRST use more energy than the Cardei method, but the average data loss decreases by up to 40%. Moreover, VRST and MVRST have less depth and data latency.","PeriodicalId":133514,"journal":{"name":"2011 31st International Conference on Distributed Computing Systems Workshops","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121931165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scaling OS Streaming through Minimizing Cache Redundancy","authors":"L. Garcés-Erice, S. Rooney","doi":"10.1109/ICDCSW.2011.35","DOIUrl":"https://doi.org/10.1109/ICDCSW.2011.35","url":null,"abstract":"OS Streaming is a common data center technique for deploying an OS image quickly onto a physical or virtual machine in which the machine requests the individual blocks of the image from a server as it needs them. When streaming images the server's OS level block cache brings very little in terms of performance as the collection of images is usually too large to fit in memory. We investigate how to improve the scalability of streaming servers by ensuring that blocks {it shared} among multiple streamed images are preferentially retained in a deduplicated cache. We outline the nature of our deduplicating block cache, describing how cacheable blocks are identified during an off line deduplication process and how an extended form of the Least Recently Used (LRU) block replacement algorithm can be used within the server cache.","PeriodicalId":133514,"journal":{"name":"2011 31st International Conference on Distributed Computing Systems Workshops","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126611020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mitigating Wireless Jamming Attacks via Channel Migration","authors":"Sangwon Hyun, P. Ning, An Liu","doi":"10.1109/ICDCSW.2011.32","DOIUrl":"https://doi.org/10.1109/ICDCSW.2011.32","url":null,"abstract":"This paper presents the design of a channel migration scheme to mitigate wireless jamming attacks. By exploiting the multiple wireless channels typically available on most wireless platforms, our scheme uses a flexible and resilient approach to switch communication channels, which enables wireless nodes to continue communication with their neighbors in the presence of jamming attacks. A nice property of the proposed scheme is that it does not depend on any single, fixed channel and executes in a decentralized and independent way on each wireless node. To investigate the effectiveness of the proposed channel migration scheme, we apply the scheme to Seluge, a secure code dissemination system for wireless sensor networks, and evaluate the resulting protocol through both theoretical analysis and experimental evaluation in a testbed of 72 MicaZ motes. Both results indicate that our channel migration scheme can effectively and efficiently mitigate jamming attacks.","PeriodicalId":133514,"journal":{"name":"2011 31st International Conference on Distributed Computing Systems Workshops","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117030736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enforcing Policy and Data Consistency of Cloud Transactions","authors":"M. Iskander, D. Wilkinson, Adam J. Lee, Panos K. Chrysanthis","doi":"10.1109/ICDCSW.2011.42","DOIUrl":"https://doi.org/10.1109/ICDCSW.2011.42","url":null,"abstract":"In distributed transactional database systems deployed over cloud servers, entities cooperate to form proofs of authorizations that are justified by collections of certified credentials. These proofs and credentials may be evaluated and collected over extended time periods under the risk of having the underlying authorization policies or the user credentials being in inconsistent states. It therefore becomes possible for a policy-based authorization systems to make unsafe decisions that might threaten sensitive resources. In this paper, we highlight the criticality of the problem. We then present the first formalization of the concept of trusted transactions when dealing with proofs of authorizations. Accordingly, we define different levels of policy consistency constraints and present different enforcement approaches to guarantee the trustworthiness of transactions executing on cloud servers. We propose a Two-Phase Validation Commit protocol as a solution, that is a modified version of the basic Two-Phase Commit protocols. We finally provide performance analysis of the different presented approaches to guide the decision makers in which approach to use.","PeriodicalId":133514,"journal":{"name":"2011 31st International Conference on Distributed Computing Systems Workshops","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131446335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Energy Efficiency Assessment for Data Center in Finland: Case Study","authors":"Xiaoshu Lu, T. Lu, M. Remes, M. Viljanen","doi":"10.1109/ICDCSW.2011.29","DOIUrl":"https://doi.org/10.1109/ICDCSW.2011.29","url":null,"abstract":"As the data-driven economy grows, we are facing unprecedented challenges of improving energy efficiency in data centers. Minimising the cooling energy demand in data centers is one of the main objectives. This paper investigates overall energy consumption and the energy efficiency of cooling system for a data center in Finland as a case study. The temporal energy consumption characteristics, cooling infrastructure and operation of the data center are analysed. The main problems about cooling energy efficiency and the factors that may contribute toward higher efficiency are identified and further suggestions are put forward. Results are presented of an extensive evaluation of the energy performance of the study data center with a view to energy recovery. The conclusion we can draw is that even though the analysed data center demonstrated relatively high energy efficiency, based on its power usage effectiveness value, there is still energy saving potential.","PeriodicalId":133514,"journal":{"name":"2011 31st International Conference on Distributed Computing Systems Workshops","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133024226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}