{"title":"Algorithmic approach for applying load balancing during task migration in multi-core system","authors":"Suchi Johari, Arvind Kumar","doi":"10.1109/PDGC.2014.7030710","DOIUrl":"https://doi.org/10.1109/PDGC.2014.7030710","url":null,"abstract":"As day by day multi core systems are increasing researchers are looking for various techniques to reduce power consumption and improve performance. In multi-core processors several cores are placed on the single grid and multiple tasks are distributed over these cores for processing [1]. Each core is responsible for the processing of the task allocated to it. This may lead to problems like load balancing on different cores, as various tasks are allocated to the single core. Now if there is a situation that a single core or a cluster of cores have large number of task to be executed while some of the cores remain free then there is a need of a technique called task migration. Task migration is not similar to load balancing in any aspect. In task migration only the migration of the task from overloaded core to other core is done but till now maintaining load balancing during task migration was an issue[2]. But if the load balancing is done at the time of the task migration then it will improve performance about 99%. This paper emphasis on various algorithms for load balancing during task migration in a multi-core system, so that number of tasks on different cores are distributed fairly. This paper also discusses the algorithms for deciding which task is to be migrated. 
With the help of these algorithms, performance and response time is improved to a great extent.","PeriodicalId":311953,"journal":{"name":"2014 International Conference on Parallel, Distributed and Grid Computing","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125199501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance improvement of BWA MEM algorithm using data-parallel with concurrent parallelization","authors":"N. Kathiresan, M. Temanni, Rashid J. Al-Ali","doi":"10.1109/PDGC.2014.7030780","DOIUrl":"https://doi.org/10.1109/PDGC.2014.7030780","url":null,"abstract":"Burrows-Wheeler Transform (BWT) is the widely used data compression technique in the next-generation sequencing (NGS) analysis. Due to the advancement in the NGS technology, the genome data size was increased rapidly and these higher volumes of genome data need to be processed by empirical parallelism. Generally, these NGS data will be processed by traditional parallel processing approaches like (i) thread parallelization (ii) Data parallelization and (iii) Concurrent parallelization, which are their own performance bottlenecks in, thread scalability, scattering/gathering of data and memory bandwidth limitations respectively. To eliminate these drawbacks, we introduced the hybrid parallelization approach called “data-parallel with concurrent parallelization” to process our genome alignment. We used BWA MEM algorithm for aligning human genome sequence, which are dominated by huge memory intensive operations and the performance is limited due to cache/TLB misses. To eliminate the cache/TLB miss, the genome data is partitioned into multiple pieces (i.e., reducing the read size) using data parallelization and concurrently processing these multiple pieces of genome data within the same cache/memory hierarchy. Hence, the performance of proposed data-parallel with concurrent parallelization is 45% better than traditional parallelization approaches. 
Additionally, we provided proof of concept to process higher volume of genome data using BWA MEM algorithm on the high-end desktop machines.","PeriodicalId":311953,"journal":{"name":"2014 International Conference on Parallel, Distributed and Grid Computing","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126602235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of resource mapping strategies for fault tolerant scheduling on grid computing","authors":"Shafi Jindal, R. Bansal, S. Bansal","doi":"10.1109/PDGC.2014.7030778","DOIUrl":"https://doi.org/10.1109/PDGC.2014.7030778","url":null,"abstract":"Grid environment has significant challenges due to failures encountered during job execution. Grid infrastructure must have mechanisms to deal with faults while also providing efficient and reliable services to its end users. Therefore fault tolerance in such an environment could not be ignored. In this paper, two strategies such as Maximum Mapping Strategy (MMS) and Exact Mapping Strategy (EMS) are proposed for fault tolerant job scheduling in grid. Whenever the scheduler has jobs to schedule, MMS and EMS selects the resources based on the maximum suitable mapping and exact mapping of job requirement value with the resource capable value respectively. If failure occurs during execution, re-schedules the jobs using the last saved state according to the failure rate of the resources. EMS approach is effective in the sense that all resources are busy at the same time so that resources are utilized in better way and Make-span i.e. the time taken by all jobs to be finished processing is also reduced. 
Failure rate is reduces in EMS because failures due to speed and memory and bandwidth is less as load is distributed almost equally among resources compared to MMS.","PeriodicalId":311953,"journal":{"name":"2014 International Conference on Parallel, Distributed and Grid Computing","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116836655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VM migration auction: Business oriented federation of cloud providers for scaling of application services","authors":"S. Majhi, P. Bera","doi":"10.1109/PDGC.2014.7030741","DOIUrl":"https://doi.org/10.1109/PDGC.2014.7030741","url":null,"abstract":"One of the key indicators of leveraging Cloud Computing is the penetration of e-business among Cloud Service Providers(CSP). The cloud computing applications are being developed across various domains to enable easy and efficient access to the data and services remotely. There is a potential for CSPs in applying e-business technologies, especially in the migration process between virtual machines (VM) running in different hosts. It is to enable efficient computing, resource sharing and to provide a real time response. There is a need to integrate an auction (bidding) in the VM migration process by applying new business models in the cloud computing marketplace to ensure competitiveness among CSPs. This paper describes an effort to establish a novel bidding process for the VM migration process in Cloud environment for e-business. The Internet-based auction process has been developed by considering English and Dutch auction. Various components for VM auction (actors, relations, VM, and business model) are presented. The suitable architecture in the VM auction service and the required tools are described. The IDEF0 model has been used for the central functionality of the broker service. 
In this proposed approach, the objective is to make independent CSPs function in a co-operative manner to provide uninterrupted service to the users on their interest and preference.","PeriodicalId":311953,"journal":{"name":"2014 International Conference on Parallel, Distributed and Grid Computing","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128591420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A log data analytics based scheduling in open source cloud software","authors":"V. Srikrishnan, E. Sivasankar, R. Pitchiah","doi":"10.1109/PDGC.2014.7030777","DOIUrl":"https://doi.org/10.1109/PDGC.2014.7030777","url":null,"abstract":"The paper proposes a Log Data Analytics based Scheduling in the Private Cloud environment in order to boost up the probability of launching Virtual Machine successfully. Analytics is applied to the logs maintained by Eucalyptus, a Open Source Cloud Software. The Proposed Methodology focuses on scheduling in the private cloud built with Eucalyptus Cloud Software, so that the proposed Scheduling at Eucalyptus Cloud Software is Analytics based; From the logs certain use cases are determined, which shall be used for capacity planning. Analytics is a conjucture, grouping, investigation, tracking and reporting of data for the cause of perception and optimization of utilization.","PeriodicalId":311953,"journal":{"name":"2014 International Conference on Parallel, Distributed and Grid Computing","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130871978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Model driven reverse engineering of user interface — A comparative study of static and dynamic model generation tools","authors":"Smita Agarwal, A. Aggarwal","doi":"10.1109/PDGC.2014.7030754","DOIUrl":"https://doi.org/10.1109/PDGC.2014.7030754","url":null,"abstract":"Many legacy applications may be outdated due to obsolete user interface however they still carry the precious business information about the data and processes. Such applications can be modernized by reengineering their user interface. The first step in modernization is the reverse engineering of the user interface. In this paper we compare the techniques to reverse engineer a simple java based application to identify the best possible technique for model generation.","PeriodicalId":311953,"journal":{"name":"2014 International Conference on Parallel, Distributed and Grid Computing","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125866848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CT image noise reduction based on adaptive wiener filtering with Wavelet packet thresholding","authors":"M. Diwakar, M. Kumar","doi":"10.1109/PDGC.2014.7030722","DOIUrl":"https://doi.org/10.1109/PDGC.2014.7030722","url":null,"abstract":"Computed Tomography (CT) is streamlined in radiological diagnostics and has become an imperative tool in medical examinations. The difficulty that arises with the demand is to improve CT image quality without increasing dose. In this paper, Wavelet based noise reduction technique is proposed to improve image quality where adaptive Wiener filtering and Wavelet Packet Threshold (WPT) algorithm are applied. The Noisy CT image is decomposed using DWT, where approximation part is filtered using WPT algorithm and detail part is filtered by the adaptive Wiener filtering. By using the level dependent, the wavelet packet tree coefficients are calculated using optimal linear interpolation shrinkage function. Denoised image is acquired using wavelet packet reconstruction and inverse DWT. The value of the peak signal to noise ratio (PSNR) is used as the measure of image visual quality. Experimental results demonstrate that the proposed method improves the image visual quality in respect of noise removal and edge preservation.","PeriodicalId":311953,"journal":{"name":"2014 International Conference on Parallel, Distributed and Grid Computing","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125875885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Software rejuvenation in cloud systems using neural networks","authors":"Ch. Sudhakar, Ishan Shah, T. Ramesh","doi":"10.1109/PDGC.2014.7030747","DOIUrl":"https://doi.org/10.1109/PDGC.2014.7030747","url":null,"abstract":"Virtual Machine Monitor (VMM) is very important for the cloud and data center environment. VMM runs continuously for a long time and hence encounters the problem of software aging. VMM experiences failure because of software aging. In order to prevent the VMM failure caused by software aging, a proactive fault management approach called software rejuvenation is used. There are various software rejuvenation approaches existing in literature that can be broadly categorized into two categories namely model based approaches and measurement based approaches. Time to failure is predicted in measurement based approaches by monitoring the resource usage statistics. There can be any non-linear relationship between resource usage statistics and the time to failure. Such a nonlinear function can be approximated using Artificial Neural Networks (ANN). The change in the value of attributes of resources is given as input to ANN and new value of time to failure is generated as output. Experiments shows that if there is some pattern in the arrival and departure of the VMs, then the prediction is more accurate.","PeriodicalId":311953,"journal":{"name":"2014 International Conference on Parallel, Distributed and Grid Computing","volume":"53 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120844221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parallel syntax analysis on multi-core machines","authors":"Amit Barve, B. K. Joshi","doi":"10.1109/PDGC.2014.7030743","DOIUrl":"https://doi.org/10.1109/PDGC.2014.7030743","url":null,"abstract":"A multi-core machine has more than one execution unit per CPU on single motherboard. With the advent of multi-core machines parallelization has become an essential part in recent compiler research. Parallel parsing is one of the areas that still needs significant work to utilize the inherent power of multi-core architecture. This paper presents an algorithm that performs parallel syntax analysis of C programs on multi-core architecture. Reasonable speed-up up to 6 was achieved on syntax analysis of C files of GCC 4.8.3.","PeriodicalId":311953,"journal":{"name":"2014 International Conference on Parallel, Distributed and Grid Computing","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131948421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding source-to-source transformations for frequent porting of applications on changing cloud architectures","authors":"Mohsin Khan, N. Priyanka, W. Ahmed, N. Radhika, M. Pavithra, K. Parimala","doi":"10.1109/PDGC.2014.7030769","DOIUrl":"https://doi.org/10.1109/PDGC.2014.7030769","url":null,"abstract":"Writing code for heterogeneous architectures with processors and accelerators from multiple vendors from scratch or translating existing serial code, a lot of effort and investment will be required from the application developer. This problem will become more prominent when HPC applications are moved into the Cloud as Cloud providers frequently update their architectures to keep with market trends. In these scenarios, automatic parallelization tools will definitely have an important role to play. An important constituent of these tools would be the ability to perform pertinent domain decomposition of the serial code to maximize utilization of the available computational elements. One of the first steps in this direction is to understand the role of the number and type of computational element in a heterogeneous architecture to the overall performance of an application. This paper presents observations made on architectures with different types and number of computational elements using two case studies on five different architectures with different types and number of computational elements. 
Results show that the perceived speedup and actual speedup are not related.","PeriodicalId":311953,"journal":{"name":"2014 International Conference on Parallel, Distributed and Grid Computing","volume":"240 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116253836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}