{"title":"Optimisation of the queries execution plan in cloud data warehouses","authors":"Ettaoufik Abdelaziz, Ouzzif Mohamed","doi":"10.1109/WICT.2015.7489659","DOIUrl":null,"url":null,"abstract":"Our everyday massive data processing activities and making correct decisions require a specific work environment. The cloud computing provides a flexible environment for customers to host and process their information through an outsourced infrastructure. This information was habitually located on local servers. Many applications dealing with massive data is routed to the cloud. Data Warehouse (DW) also benefit from this new paradigm to provide analytical data online and in real time. DW in the Cloud benefited of its advantages such flexibility, availability, adaptability, scalability, virtualization, etc. Improving the DW performance in the cloud requires the optimization of data processing time. The classical optimization techniques (indexing, materialized views and fragmentation) are still essential for DW in the cloud. The DW is partitioned before being distributed across multiple servers (nodes) in the Cloud. When queries containing multiple joins or ask voluminous data stored on multiple nodes, inter-node communication increases and consequently the DW performance degrades. In this paper, we propose an approach for improving the performance of DW in the cloud. Our approach is based on a classification technique of requests received by the nodes. For this purpose we use an algorithm based on the MapReduce programming model, this algorithm allows to identify the list of requests sent to the DW hosted in the cloud, and classify queries by the publication frequency. From the list of search queries we propose a partitioning scheme of DW on different nodes in order to reduce the inter-node communication and therefore minimize the queries processing time.","PeriodicalId":246557,"journal":{"name":"2015 5th World Congress on Information and Communication Technologies (WICT)","volume":"94 S87","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 5th World Congress on Information and Communication Technologies (WICT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WICT.2015.7489659","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
Our everyday massive-data processing activities and the need to make correct decisions require a suitable work environment. Cloud computing provides a flexible environment in which customers can host and process their information on an outsourced infrastructure. This information was traditionally stored on local servers, and many applications dealing with massive data are now migrated to the cloud. The Data Warehouse (DW) also benefits from this new paradigm to provide analytical data online and in real time. A DW in the cloud gains advantages such as flexibility, availability, adaptability, scalability, and virtualization. Improving DW performance in the cloud requires optimizing data processing time. The classical optimization techniques (indexing, materialized views, and fragmentation) remain essential for a DW in the cloud. The DW is partitioned before being distributed across multiple servers (nodes) in the cloud. When queries contain multiple joins or request voluminous data stored on multiple nodes, inter-node communication increases and DW performance consequently degrades. In this paper, we propose an approach for improving the performance of a DW in the cloud. Our approach is based on a technique for classifying the queries received by the nodes. For this purpose we use an algorithm based on the MapReduce programming model; this algorithm identifies the list of queries sent to the DW hosted in the cloud and classifies them by their frequency of occurrence. From this list of queries, we derive a partitioning scheme that places the DW fragments on different nodes so as to reduce inter-node communication and thereby minimize query processing time.
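The classification step described above can be illustrated with a minimal sketch (not the authors' implementation): a map step emits one pair per normalised query template, a reduce step sums the pairs into per-template frequencies, and the templates are then split into frequent and rare classes, with the frequent class being the input that would drive the fragmentation scheme. The log format, the normalisation rule, the frequency threshold, and all function names below are illustrative assumptions.

# Minimal sketch in plain Python of a MapReduce-style frequency classification.
# Input format, normalisation, and the threshold value are assumptions, not from the paper.
from collections import defaultdict


def map_phase(query_log):
    """Map step: emit (normalised query template, 1) for every logged query."""
    for query in query_log:
        template = " ".join(query.lower().split())  # crude case/whitespace normalisation
        yield template, 1


def reduce_phase(pairs):
    """Reduce step: sum the emitted counts to obtain each template's frequency."""
    frequencies = defaultdict(int)
    for template, count in pairs:
        frequencies[template] += count
    return dict(frequencies)


def classify_by_frequency(frequencies, threshold=10):
    """Split query templates into 'frequent' and 'rare' classes by their frequency."""
    frequent = {q: f for q, f in frequencies.items() if f >= threshold}
    rare = {q: f for q, f in frequencies.items() if f < threshold}
    return frequent, rare


if __name__ == "__main__":
    log = [
        "SELECT SUM(sales) FROM fact_sales JOIN dim_time USING (time_id)",
        "SELECT SUM(sales) FROM fact_sales JOIN dim_time USING (time_id)",
        "SELECT AVG(price) FROM fact_sales JOIN dim_product USING (product_id)",
    ]
    frequent, rare = classify_by_frequency(reduce_phase(map_phase(log)), threshold=2)
    print("frequent:", frequent)  # candidates for driving the partitioning scheme
    print("rare:", rare)

In an actual deployment the two phases would run as MapReduce jobs over the distributed query log, and the frequent templates would then be analysed to decide which DW fragments to co-locate on the same node so that the joins they perform no longer cross node boundaries.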