{"title":"Building a TaaS Platform for Web Service Load Testing","authors":"Minzhi Yan, Hailong Sun, Xu Wang, Xudong Liu","doi":"10.1109/CLUSTER.2012.20","DOIUrl":"https://doi.org/10.1109/CLUSTER.2012.20","url":null,"abstract":"Web services are widely known as the building blocks of typical service-oriented applications. The performance of such an application system depends mainly on that of its component web services. Thus, effective load testing of web services is of great importance for understanding and improving the performance of a service-oriented system. However, existing Web Service load testing tools ignore the real characteristics of a web service's practical running environment, which leads to inaccurate test results. In this work, we present WS-TaaS, a load testing platform for web services that enables the load testing process to be as close as possible to real running scenarios. In this way, we aim to provide testers with more accurate performance results than existing tools. WS-TaaS is developed on the basis of our existing cloud PaaS platform, Service4All. First, we provide a detailed analysis of the requirements of Web Service load testing and present the conceptual architecture and the design of key components. Then we present the implementation details of WS-TaaS on the basis of Service4All. 
Finally, we conduct a set of experiments on real web services, which illustrate that WS-TaaS can efficiently facilitate the whole process of Web Service load testing.","PeriodicalId":143579,"journal":{"name":"2012 IEEE International Conference on Cluster Computing","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134502439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Clover: A Distributed File System of Expandable Metadata Service Derived from HDFS","authors":"Youwei Wang, Jiang Zhou, Can Ma, Weiping Wang, Dan Meng, Jason Kei","doi":"10.1109/CLUSTER.2012.54","DOIUrl":"https://doi.org/10.1109/CLUSTER.2012.54","url":null,"abstract":"Storing and managing data efficiently is a critical issue confronting modern information infrastructures. To accommodate the massive scale of data in the Internet environment, most common solutions rely on distributed file systems. However, these systems still have disadvantages that prevent them from delivering satisfactory performance. In this paper, we present Clover, a NameNode-cluster file system based on HDFS. This file system exploits two critical features: an improved two-phase commit (2PC) protocol that ensures consistent metadata updates across multiple metadata servers, and a shared storage pool that provides robust persistent metadata storage and supports distributed transactions. We compare Clover with HDFS and demonstrate its key advantages. Further experimental results show that our system achieves better metadata expandability, ranging from 10% to 90% by quantified metrics as each extra server is added, while preserving similar I/O performance.","PeriodicalId":143579,"journal":{"name":"2012 IEEE International Conference on Cluster Computing","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127457428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Affinity-aware Virtual Cluster Optimization for MapReduce Applications","authors":"Cairong Yan, Ming Zhu, Xin Yang, Ze Yu, Min Li, Youqun Shi, Xiaolin Li","doi":"10.1109/CLUSTER.2012.13","DOIUrl":"https://doi.org/10.1109/CLUSTER.2012.13","url":null,"abstract":"Infrastructure-as-a-Service clouds are becoming ubiquitous for provisioning virtual machines on demand. Cloud service providers aim to deliver the best service with the least resources. As users frequently request virtual machines to build virtual clusters and run MapReduce-like jobs for big data processing, cloud service providers intend to place virtual machines close together to minimize network latency and thereby reduce data movement cost. In this paper, we focus on the virtual machine placement problem for provisioning virtual clusters with minimum network latency in clouds. We define distance as the latency between virtual machines and use it to measure the affinity of a virtual cluster. This distance metric reflects both virtual machine placement and the topology of the physical nodes in the cloud. We then formulate our problem as a classical shortest-distance problem and solve it as an integer program. A greedy virtual machine placement algorithm is designed to obtain a compact virtual cluster, and an improved heuristic algorithm is presented to achieve global resource optimization. 
The simulation results verify our algorithms, and the experimental results validate the improvement achieved by our approaches.","PeriodicalId":143579,"journal":{"name":"2012 IEEE International Conference on Cluster Computing","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116990221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Community Clustering for Distributed Publish/Subscribe Systems","authors":"Wei Li, Songlin Hu, Jintao Li, H. Jacobsen","doi":"10.1109/CLUSTER.2012.67","DOIUrl":"https://doi.org/10.1109/CLUSTER.2012.67","url":null,"abstract":"Optimized placement of clients in a distributed publish/subscribe system is an important technique for improving overall system efficiency. Current methods, such as interest clustering or publisher placement, treat a client as either a pure publisher or a pure subscriber, but not as both. Also, the cost of client movement is usually ignored. However, many applications built on publish/subscribe systems model clients as publishers and subscribers at the same time, which breaks the assumptions made by current approaches. Considering the complex dependencies among clients, we propose a new community-oriented clustering approach based on forming client clusters that exhibit intense communication relationships while keeping client movement cost low. An evaluation based on a public data set shows that our method is efficient, adapts to different experimental conditions, and outperforms the popular interest clustering approach with respect to the number of messages sent, propagation hop count, and end-to-end latency.","PeriodicalId":143579,"journal":{"name":"2012 IEEE International Conference on Cluster Computing","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121405536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multicore-Enabled Smart Storage for Clusters","authors":"Zhiyang Ding, Xunfei Jiang, Shu Yin, X. Qin, Kai-Hsiung Chang, X. Ruan, Mohammed I. Alghamdi, Meikang Qiu","doi":"10.1109/CLUSTER.2012.70","DOIUrl":"https://doi.org/10.1109/CLUSTER.2012.70","url":null,"abstract":"We present a multicore-enabled smart storage system for clusters in general and MapReduce clusters in particular. The goal of this research is to improve the performance of data-intensive parallel applications on clusters by offloading data processing to multicore processors in storage nodes. Compared with traditional storage devices, next-generation disks will have the computing capability to reduce the computational load of host processors. With the advance of processor and memory technologies, smart storage systems are promising devices for performing complex on-disk operations. The proposed smart storage system avoids moving huge amounts of data back and forth between storage nodes and computing nodes in a cluster. To enhance the performance of data-intensive applications, we have designed a smart storage system called Multicore-enabled Smart Storage (McSD), in which a multicore processor is integrated into each storage node. We have implemented a programming framework for data-intensive applications running on a computing system coupled with McSD. The programming framework aims at balancing load between computing nodes and multicore-enabled smart storage nodes. To fully utilize the multicore processors in smart storage nodes, we have implemented the MapReduce model for McSDs to handle parallel computing on a cluster. A prototype of McSD has been implemented in a cluster connected by Gigabit Ethernet. Experimental results show that McSD can significantly reduce the execution times of three real-world applications - word count, string matching, and matrix multiplication. 
We demonstrate that the integration of multicore-enabled smart storage with MapReduce clusters is a promising approach to improving overall performance of data-intensive applications on clusters.","PeriodicalId":143579,"journal":{"name":"2012 IEEE International Conference on Cluster Computing","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132924913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SDM: A Stripe-Based Data Migration Scheme to Improve the Scalability of RAID-6","authors":"Chentao Wu, Xubin He, Jizhong Han, Huailiang Tan, C. Xie","doi":"10.1109/CLUSTER.2012.24","DOIUrl":"https://doi.org/10.1109/CLUSTER.2012.24","url":null,"abstract":"In large-scale data storage systems, RAID-6 has received increasing attention due to its capability to tolerate concurrent failures of any two disks, providing a higher level of reliability. However, a challenging issue is its scalability, i.e., how to efficiently expand the number of disks. The main cause of this problem is that most RAID-6 systems rely on Maximum Distance Separable (MDS) codes, a fault-tolerance scheme that offers data protection against disk failures with optimal storage efficiency but is difficult to scale. To address this issue, we propose a novel Stripe-based Data Migration (SDM) scheme for large-scale RAID-6 storage systems to achieve higher scalability. SDM is a stripe-level scheme whose basic idea is to optimize data movements according to the future parity layout, minimizing the overhead of data migration and parity modification. The SDM scheme also provides uniform data distribution, fast data addressing, and fast migration. We have conducted extensive mathematical analysis of applying SDM to various popular RAID-6 coding methods, such as RDP, P-Code, H-Code, HDP, X-Code, and EVENODD. 
The results show that, compared to existing scaling approaches, SDM reduces migration I/O operations by more than 72.7% and saves up to 96.9% of the migration time, speeding up the scaling process by a factor of up to 32.","PeriodicalId":143579,"journal":{"name":"2012 IEEE International Conference on Cluster Computing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133877504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A GPU-accelerated Branch-and-Bound Algorithm for the Flow-Shop Scheduling Problem","authors":"N. Melab, Imen Chakroun, M. Mezmaz, D. Tuyttens","doi":"10.1109/CLUSTER.2012.18","DOIUrl":"https://doi.org/10.1109/CLUSTER.2012.18","url":null,"abstract":"Branch-and-Bound (B&B) algorithms are time-intensive tree-based exploration methods for solving combinatorial optimization problems to optimality. In this paper, we investigate the use of GPU computing as a major complementary way to speed up these methods. The focus is on the bounding mechanism of B&B algorithms, which is the most time-consuming part of their exploration process. We propose a parallel B&B algorithm based on a GPU-accelerated bounding model. The proposed approach concentrates on optimizing data access management to further improve the performance of the bounding mechanism, which uses large intermediate data sets that do not completely fit in GPU memory. Extensive experiments have been carried out on well-known FSP benchmarks using an Nvidia Tesla C2050 GPU card. We compared the obtained performance to single-threaded and multithreaded CPU-based executions. Speedups of up to 100x are achieved for large problem instances.","PeriodicalId":143579,"journal":{"name":"2012 IEEE International Conference on Cluster Computing","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127732147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}