Latest Articles: IEEE Transactions on Cloud Computing

FUSIONIZE++: Improving Serverless Application Performance Using Dynamic Task Inlining and Infrastructure Optimization
IF 5.3 | CAS Tier 2, Computer Science
IEEE Transactions on Cloud Computing Pub Date : 2024-08-28 DOI: 10.1109/TCC.2024.3451108
Trever Schirmer;Joel Scheuner;Tobias Pfandzelter;David Bermbach
Abstract: The Function-as-a-Service (FaaS) execution model increases developer productivity by removing operational concerns such as managing hardware or software runtimes. Developers, however, still need to partition their applications into FaaS functions, which is error-prone and complex: encapsulating only the smallest logical unit of an application as a FaaS function maximizes flexibility and reusability, yet it also introduces invocation overheads and additional cold starts, and may increase cost through double billing during synchronous invocations. Conversely, deploying an entire application as a single FaaS function avoids these overheads but decreases flexibility. In this paper, we present Fusionize, a framework that automates this trade-off by fusing application code into an optimized multi-function composition. Developers only need to write fine-grained application code following the serverless model, while Fusionize automatically fuses different parts of the application into FaaS functions, manages their interactions, and configures the underlying infrastructure. At runtime, it monitors application performance and adapts the composition to minimize request-response latency and cost. Real-world use cases show that Fusionize can improve the application's deployment artifacts, reducing both the median request-response latency and the cost of an example IoT application by more than 35%.
Vol. 12, no. 4, pp. 1172-1185.
Citations: 0
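The core fuse-or-split trade-off the abstract describes can be illustrated with a small model. This is a hypothetical sketch, not Fusionize's actual algorithm: the overhead numbers, cold-start probability, and function names are all illustrative assumptions.

```python
# Illustrative sketch (not Fusionize's real optimizer) of the fusion
# trade-off: fuse tasks into one FaaS function and run them serially, or
# keep them as separate functions invoked in parallel, paying invocation
# overhead and occasional cold starts per call. All constants are made up.

def fused_latency(task_ms):
    """All tasks inlined into one function: they execute serially."""
    return sum(task_ms)

def split_latency(task_ms, invoke_overhead_ms=25.0,
                  cold_start_ms=400.0, cold_prob=0.05):
    """Each task is its own function, invoked in parallel by the caller;
    latency is governed by the slowest call."""
    per_call = [t + invoke_overhead_ms + cold_prob * cold_start_ms
                for t in task_ms]
    return max(per_call)

def should_fuse(task_ms, **kw):
    return fused_latency(task_ms) <= split_latency(task_ms, **kw)

print(should_fuse([10.0]))       # True: one tiny task, overhead dominates
print(should_fuse([200.0] * 8))  # False: a parallel fan-out wins
```

A runtime monitor could re-evaluate this decision as measured latencies change, which is the spirit of the paper's dynamic adaptation.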
Transfer Learning Based Multi-Objective Evolutionary Algorithm for Dynamic Workflow Scheduling in the Cloud
IF 5.3 | CAS Tier 2, Computer Science
IEEE Transactions on Cloud Computing Pub Date : 2024-08-28 DOI: 10.1109/TCC.2024.3450858
Huamao Xie;Ding Ding;Lihong Zhao;Kaixuan Kang
Abstract: Managing scientific applications in the Cloud poses many workflow-scheduling challenges, especially multi-objective workflow scheduling under quality-of-service (QoS) constraints. Most studies, however, address workflow scheduling under the premise of an unchanging environment, without considering the high dynamics of the Cloud. In this paper, we model constrained workflow scheduling in a dynamic Cloud environment as a dynamic multi-objective optimization problem with preferences, and propose a transfer learning based multi-objective evolutionary algorithm (TL-MOEA) to tackle this dynamic workflow scheduling problem. Specifically, an elite-led transfer learning strategy explores effective parameter adaptation for the MOEA by transferring helpful knowledge from elite solutions in past environments to accelerate optimization. In addition, a multi-space diversity learning strategy maintains population diversity. To satisfy the various QoS constraints of workflow scheduling, a preference-based selection strategy is further designed to retain promising solutions in each iteration. Extensive experiments on five well-known scientific workflows demonstrate that TL-MOEA achieves highly competitive performance compared to several state-of-the-art algorithms, and obtains triple-win solutions that minimize makespan, cost, and energy consumption for dynamic workflow scheduling with user-defined constraints.
Vol. 12, no. 4, pp. 1200-1217.
Citations: 0
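The elite-led transfer idea can be sketched in a few lines: when the environment changes, the next population is seeded from perturbed copies of past elite solutions instead of restarting from scratch. The representation (real-valued vectors), mutation scale, and transfer ratio below are illustrative assumptions, not the paper's parameters.

```python
import random

# Hypothetical sketch of elite-led transfer in a dynamic MOEA: half of
# the new population comes from Gaussian-perturbed past elites (knowledge
# transfer); the rest are random immigrants to preserve diversity.

def seed_population(elites, pop_size, dim, lo=0.0, hi=1.0,
                    transfer_ratio=0.5, sigma=0.05, rng=random):
    pop = []
    n_transfer = int(pop_size * transfer_ratio) if elites else 0
    for i in range(n_transfer):
        base = elites[i % len(elites)]
        # Perturb an elite solution and clip to the decision-variable bounds.
        child = [min(hi, max(lo, x + rng.gauss(0.0, sigma))) for x in base]
        pop.append(child)
    while len(pop) < pop_size:
        pop.append([rng.uniform(lo, hi) for _ in range(dim)])
    return pop

rng = random.Random(7)
elites = [[0.2, 0.8, 0.5]]                 # Pareto archive from the old environment
pop = seed_population(elites, pop_size=10, dim=3, rng=rng)
print(len(pop))  # 10
```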
Burst Load Frequency Prediction Based on Google Cloud Platform Server
IF 5.3 | CAS Tier 2, Computer Science
IEEE Transactions on Cloud Computing Pub Date : 2024-08-26 DOI: 10.1109/TCC.2024.3449884
Hui Wang
Abstract: The widespread use of cloud computing platforms has increased server load pressure. In particular, frequent burst loads cause resource waste, data damage and loss, and security vulnerabilities, posing a severe threat to the service capability and stability of cloud platforms. To reduce or avoid this harm, this article studies the frequency of burst loads in depth. Based on Google cluster trace data, it proposes a new burst load frequency calculation model, "Two-step Judgment," and a burst load frequency prediction model, "Combined-LSTM." The Two-step Judgment model uses data attributes for a rough judgment and then applies the random forest algorithm for a precise judgment, ensuring accurate calculation of burst load frequency. The Combined-LSTM model is a multi-input, single-output prediction model built with a multi-model ensemble method. It combines the advantages of the 1-Dimensional Convolutional Neural Network (1D-CNN), Gated Recurrent Unit (GRU), and Long Short-Term Memory (LSTM), and uses parallel computation to predict burst load frequency accurately. In the evaluation, the Two-step Judgment and Combined-LSTM models showed significant advantages over other prediction models in accuracy, generalization ability, and time complexity.
Vol. 12, no. 4, pp. 1158-1171.
Citations: 0
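The multi-model ensemble idea, several base predictors whose outputs are blended, can be shown without a deep-learning framework. The three toy base models and the weights below are stand-ins of my own invention; the paper's base learners are a 1D-CNN, a GRU, and an LSTM.

```python
# Toy stand-in for a multi-model ensemble predicting the next value of a
# load-frequency series. The base models here (persistence, moving
# average, linear trend) are illustrative, not the paper's networks.

def last_value(series):            # naive persistence model
    return series[-1]

def moving_average(series, k=3):   # short-window trend smoother
    window = series[-k:]
    return sum(window) / len(window)

def linear_trend(series):          # extrapolate the last step's slope
    return series[-1] + (series[-1] - series[-2])

def ensemble_predict(series, weights=(0.3, 0.3, 0.4)):
    preds = (last_value(series), moving_average(series), linear_trend(series))
    return sum(w * p for w, p in zip(weights, preds))

hist = [4.0, 5.0, 6.0]
print(ensemble_predict(hist))  # 6.1: weighted blend of 6.0, 5.0, and 7.0
```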
A Novel Scheduling Approach for Spark Workflow Tasks With Deadline and Uncertain Performance in Multi-Cloud Networks
IF 5.3 | CAS Tier 2, Computer Science
IEEE Transactions on Cloud Computing Pub Date : 2024-08-26 DOI: 10.1109/TCC.2024.3449771
Kamran Yaseen Rajput;Xiaoping Li;Jinquan Zhang;Abdullah Lakhan
Abstract: The use of cloud computing services continues to grow across applications in business, commerce, healthcare, and beyond, all of which demand ever more computation for their execution. To meet these expanding demands cost-effectively, cloud computing offers a pay-as-you-go billing model. Because of the complex requirements of such applications, a single cloud is often insufficient: single-cloud deployments are limited by resource constraints such as inadequate storage and computing power, and by single points of failure that can compromise the integrity of the entire application. Multi-cloud strategies, which provide more scalable storage and computing resources, are therefore increasingly popular. The multi-cloud landscape, however, spans many providers, and managing workflow scheduling in this dynamic environment is a significant hurdle. This paper focuses on scheduling Spark workflow tasks in multi-cloud networks, addressing the challenges posed by differing pricing models, dynamic resource provisioning, inter- and intra-cloud transmission times, and unstable resource performance. We propose a novel heuristic-based approach that accounts for constraints such as VM instance heterogeneity, priority constraints, transmission times, and performance uncertainty. The goal is to schedule all tasks on virtual machines (VMs) at the lowest possible rental cost while meeting workflow deadlines. Simulation results show that the proposed method effectively schedules Spark workflow tasks in multi-cloud networks, improving scheduling performance by 50% compared to existing approaches.
Vol. 12, no. 4, pp. 1145-1157.
Citations: 0
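A minimal version of the deadline-and-cost objective can be captured by a greedy heuristic: assign each task, in priority order, to the cheapest VM type that still meets the deadline after inflating runtimes by an uncertainty factor. This sketch is in the spirit of the paper's approach, not its actual algorithm; the VM types, prices, and the 1.2 uncertainty factor are illustrative.

```python
# Illustrative greedy deadline-aware scheduler (not the paper's method).
# tasks: list of (name, base_runtime); vm_types: list of
# (name, speedup, price_per_unit_time). Runtimes are inflated by an
# uncertainty factor to hedge against unstable VM performance.

def schedule(tasks, vm_types, deadline, uncertainty=1.2):
    plan, elapsed, cost = [], 0.0, 0.0
    for name, base in tasks:                       # assume priority order
        best = None
        for vm, speedup, price in sorted(vm_types, key=lambda v: v[2]):
            runtime = base / speedup * uncertainty
            if elapsed + runtime <= deadline:
                best = (vm, runtime, runtime * price)
                break                              # cheapest feasible VM
        if best is None:
            return None                            # deadline infeasible
        vm, runtime, c = best
        plan.append((name, vm))
        elapsed += runtime
        cost += c
    return plan, elapsed, cost

vms = [("small", 1.0, 1.0), ("large", 2.0, 3.0)]
print(schedule([("map", 10.0), ("reduce", 6.0)], vms, deadline=25.0))
```

With a looser deadline everything lands on the cheap VM; tightening the deadline forces later tasks onto the faster, pricier type.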
Enabling Authorized Fine-Grained Data Retrieval Over Aggregated Encrypted Medical Data in Cloud-Assisted E-Health Systems
IF 5.3 | CAS Tier 2, Computer Science
IEEE Transactions on Cloud Computing Pub Date : 2024-08-19 DOI: 10.1109/TCC.2024.3445430
Wei Tang;Xiaojun Zhang;Dawu Gu;Chao Huang;Jingting Xue;Xiangyu Liang
Abstract: Encrypted medical data outsourced to cloud servers can be used for personal health certification, health monitoring, and medical research, and are essential to the development of the medical industry. However, the traditional peer-to-peer data-sharing paradigm can lead to data abuse by malicious data analysis centers, and the encryption that protects users' outsourced privacy restricts the flexibility of data retrieval. Based on a modified double-trapdoor cryptosystem, we propose an authorized data retrieval scheme over aggregated encrypted medical data (ADR-AED) for cloud-assisted e-healthcare systems. In ADR-AED, patients can access and decrypt personal data and authorize the data analysis center (DAC) to retrieve corresponding data. Specifically, we design an authorized retrieval-test mechanism from a group of patients to the DAC, which allows the DAC to extract valuable information only with a threshold number of authorized users. Additionally, each patient can flexibly retrieve fine-grained medical data from different periods and submit them to a doctor for diagnostic analysis. The security analysis and performance evaluation demonstrate the feasibility of ADR-AED for deployment in cloud-assisted e-healthcare systems.
Vol. 12, no. 4, pp. 1131-1144.
Citations: 0
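The threshold-authorization idea, the DAC can act only when enough patients cooperate, is commonly realized with threshold secret sharing. As a simplified stand-in (not the paper's double-trapdoor construction), here is textbook (t, n) Shamir secret sharing over a prime field: any t of n authorization shares reconstruct a retrieval key, while fewer reveal nothing.

```python
import random

# Textbook (t, n) Shamir secret sharing, shown as a stand-in for the
# threshold authorization in ADR-AED. The secret is the constant term of
# a random degree-(t-1) polynomial; shares are evaluations at x = 1..n;
# any t shares recover the secret by Lagrange interpolation at x = 0.

P = 2**61 - 1  # a Mersenne prime defining the field

def make_shares(secret, t, n, rng):
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse (Fermat's little theorem).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

rng = random.Random(42)
key = 123456789
shares = make_shares(key, t=3, n=5, rng=rng)
print(reconstruct(shares[:3]) == key)  # True: any 3 of 5 shares suffice
```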
Efficient Secure CNN Inference: A Multi-Server Framework Based on Conditional Separable and Homomorphic Encryption
IF 5.3 | CAS Tier 2, Computer Science
IEEE Transactions on Cloud Computing Pub Date : 2024-08-14 DOI: 10.1109/TCC.2024.3443405
Longlong Sun;Hui Li;Yanguo Peng;Jiangtao Cui
Abstract: Deep learning inference has become a fundamental component of cloud service offerings, and privacy during such services has received significant attention. Although many privacy-preserving schemes have been proposed, they require further improvement. In this article, we propose Serpens, an efficient secure inference framework for convolutional neural networks (CNNs) that protects users' uploaded data. We introduce a pair of novel concepts, separable and conditionally separable, to determine whether a CNN layer can be computed across multiple servers. We show that linear layers are separable and construct factor-functions that reduce their overhead to nearly zero. For the two nonlinear layers, ReLU and max pooling, we design four secure protocols based on homomorphic encryption and random masks for two-server and n-server settings. These protocols differ essentially from existing schemes, which are primarily based on garbled circuits. In addition, we propose a method to split input images securely. Experimental results show that Serpens is 60×-197× faster than the previous scheme in the two-server setting. The advantage of Serpens is even more pronounced in the n-server setting, where it is less than an order of magnitude slower than plaintext inference in the cloud.
Vol. 12, no. 4, pp. 1116-1130.
Citations: 0
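Why linear layers are "separable" across servers can be demonstrated with additive secret sharing: a client splits its input x into random shares x1 + x2 = x, each server applies the linear layer to one share, and the client sums the partial results. Neither server sees x. This sketch uses plain lists and a matrix-vector product; the real system additionally handles convolutions, nonlinear-layer protocols, and homomorphic encryption.

```python
import random

# Demonstration of the separability of linear layers: because
# W @ (x1 + x2) = W @ x1 + W @ x2, two servers can each process one
# random additive share of the input, and the sum of their outputs
# equals the output on the true input.

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def share(x, rng):
    x2 = [rng.uniform(-1, 1) for _ in x]    # random mask = share 2
    x1 = [a - b for a, b in zip(x, x2)]     # share 1 hides x
    return x1, x2

rng = random.Random(0)
W = [[1.0, 2.0], [3.0, 4.0]]                # a tiny "linear layer"
x = [0.5, -1.0]                             # the client's private input
x1, x2 = share(x, rng)
y = [a + b for a, b in zip(matvec(W, x1), matvec(W, x2))]
print(y, matvec(W, x))                      # the two results agree
```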
Enabling Privacy-Preserving Parallel Computation of Linear Regression in Edge Computing Networks
IF 5.3 | CAS Tier 2, Computer Science
IEEE Transactions on Cloud Computing Pub Date : 2024-08-08 DOI: 10.1109/TCC.2024.3440656
Wenjing Gao;Jia Yu;Huaqun Wang
Abstract: Linear regression is a classical statistical model with a wide range of applications; its function is to predict the value of a dependent variable (the output) given an independent variable (the input), and training finds a linear relationship between input and output from data samples. IoT applications usually require real-time data processing, yet existing schemes for privacy-preserving outsourcing of linear regression cannot fully meet this rapid-response requirement. To address this issue, we employ multiple edge servers to achieve privacy-preserving parallel computation of linear regression. We propose two novel solutions based on edge servers in edge computing networks and construct two efficient schemes for linear regression. In the first scheme, we present a new blinding technique for data privacy protection, with two edge servers executing the encrypted linear regression task in parallel. To further enhance efficiency, we design an adaptive parallel algorithm, adopted in the second scheme, which employs multiple edge servers to achieve higher efficiency. We analyze the correctness, privacy, and verifiability of the proposed schemes. Finally, we assess their computational overhead and conduct experiments to validate their performance advantages.
Vol. 12, no. 4, pp. 1103-1115.
Citations: 0
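The parallel part of this idea, without the blinding, can be sketched via partial normal equations: each edge server computes the sums needed for the least-squares fit on its own shard of samples, and an aggregator combines them and solves for the coefficients. The shard layout and function names are illustrative; the paper's schemes additionally blind the data before outsourcing.

```python
# Sketch of parallel simple linear regression y = a + b*x: each "edge
# server" computes partial normal-equation sums over its shard; the
# aggregator adds them and solves the 2x2 system in closed form.

def partial_terms(samples):
    """samples: list of (x, y) pairs held by one server."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sxx = sum(x * x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxy = sum(x * y for x, y in samples)
    return n, sx, sxx, sy, sxy

def aggregate_and_solve(parts):
    n = sum(p[0] for p in parts); sx = sum(p[1] for p in parts)
    sxx = sum(p[2] for p in parts); sy = sum(p[3] for p in parts)
    sxy = sum(p[4] for p in parts)
    det = n * sxx - sx * sx
    a = (sy * sxx - sx * sxy) / det   # intercept
    b = (n * sxy - sx * sy) / det     # slope
    return a, b

# Two "servers" each hold half of a dataset lying on y = 1 + 2x:
shard1 = [(0.0, 1.0), (1.0, 3.0)]
shard2 = [(2.0, 5.0), (3.0, 7.0)]
parts = [partial_terms(s) for s in (shard1, shard2)]
print(aggregate_and_solve(parts))  # (1.0, 2.0)
```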
BaaSLess: Backend-as-a-Service (BaaS)-Enabled Workflows in Federated Serverless Infrastructures
IF 5.3 | CAS Tier 2, Computer Science
IEEE Transactions on Cloud Computing Pub Date : 2024-08-06 DOI: 10.1109/TCC.2024.3439268
Thomas Larcher;Philipp Gritsch;Stefan Nastic;Sashko Ristov
Abstract: Serverless is a popular paradigm for expressing compute-intensive applications as serverless workflows. In practice, a significant portion of the computing is typically offloaded to various Backend-as-a-Service (BaaS) cloud services. The recent rise of federated serverless and Sky computing offers cost and performance advantages for these BaaS-enabled serverless workflows, but vendor lock-in and the lack of service interoperability leave many challenges in their development, deployment, and scheduling on federated serverless infrastructures. This paper introduces BaaSLess, a novel platform that delivers global, dynamic, federated BaaS to serverless workflows. BaaSLess provides: i) a novel SDK for uniform and dynamic access to federated BaaS services, reducing the complexity of developing BaaS-enabled serverless workflows; ii) a novel globally federated serverless BaaS framework that delivers a suite of BaaS-less ML services, including text-to-speech, speech-to-text, translation, and OCR, together with a globally federated storage infrastructure spanning the AWS and Google cloud providers; and iii) a novel model and algorithm for scheduling BaaS-enabled serverless workflows to improve their performance. Experimental results with three complementary BaaS-enabled serverless workflows show that BaaSLess improves workflow execution time by up to 2.95× compared to state-of-the-art serverless schedulers, often at a lower cost.
Vol. 12, no. 4, pp. 1088-1102.
Citations: 0
A Scalable and Write-Optimized Disaggregated B+-Tree With Adaptive Cache Assistance
IF 5.3 | CAS Tier 2, Computer Science
IEEE Transactions on Cloud Computing Pub Date : 2024-08-02 DOI: 10.1109/TCC.2024.3437472
Hang An;Fang Wang;Dan Feng;Xiaomin Zou;Zefeng Liu;Jianshun Zhang
Abstract: The disaggregated memory (DM) architecture separates CPU and DRAM into computing and memory resource pools interconnected by high-speed networks. Storage systems on DM locate data through distributed indexes, but existing distributed indexes either suffer prohibitive synchronization overhead for write operations or sacrifice read performance, resulting in low throughput, high tail latency, and a difficult trade-off. In this paper, we present Marlin+, a scalable and write-optimized B+-tree on DM. Marlin+ provides atomic-granularity synchronization between write operations via three strategies: 1) a concurrent algorithm friendly to IDU operations (Insert, Delete, and Update), enabling different clients to operate concurrently on the same leaf node; 2) a shared-exclusive leaf-node lock that effectively prevents conflicts between index structure modification operations (SMOs) and IDU operations; and 3) critical-path compression to reduce write latency. Moreover, Marlin+ offers an adaptive remote address cache to accelerate access to hot data. Compared to state-of-the-art schemes on DM, Marlin achieves 2.21× higher throughput and 83.4% lower P99 latency under YCSB hybrid workloads. Compared to Marlin, Marlin+ improves throughput by up to 1.58× and reduces P50 latency by up to 50.5% under YCSB read-intensive workloads.
Vol. 12, no. 4, pp. 1074-1087.
Citations: 0
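The shared-exclusive leaf-node lock the abstract mentions is a reader-writer lock: many IDU operations may hold it in shared mode concurrently, while an SMO must hold it exclusively. Below is a minimal local-memory sketch of that semantics using a condition variable; Marlin+ itself implements the lock over RDMA on disaggregated memory, which this sketch does not attempt.

```python
import threading

# Minimal shared-exclusive (reader-writer) lock illustrating the leaf-node
# locking discipline: concurrent shared holders for IDU operations, sole
# exclusive holder for an SMO. Local-memory sketch only.

class SharedExclusiveLock:
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_shared(self):
        with self._cond:
            while self._writer:          # wait out any exclusive holder
                self._cond.wait()
            self._readers += 1

    def release_shared(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()  # wake a waiting SMO

    def acquire_exclusive(self):
        with self._cond:
            while self._writer or self._readers:
                self._cond.wait()
            self._writer = True

    def release_exclusive(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()

lock = SharedExclusiveLock()
lock.acquire_shared(); lock.acquire_shared()   # two concurrent IDU clients
lock.release_shared(); lock.release_shared()
lock.acquire_exclusive()                       # SMO gets sole access
lock.release_exclusive()
print("ok")
```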
Sparkle: Deep Learning Driven Autotuning for Taming High-Dimensionality of Spark Deployments
IF 5.3 | CAS Tier 2, Computer Science
IEEE Transactions on Cloud Computing Pub Date : 2024-08-02 DOI: 10.1109/TCC.2024.3437484
Dimosthenis Masouros;George Retsinas;Sotirios Xydis;Dimitrios Soudris
Abstract: The exponential growth of data in the Cloud has highlighted the need for more efficient data processing. In-memory computing frameworks (e.g., Spark) improve the efficiency of large-scale data analytics, but they also expose a plethora of configuration parameters that affect the resource consumption and performance of applications. Manually optimizing these parameters is time-consuming because of i) the high-dimensional configuration space, ii) the complex inter-relationships between parameters, iii) the diverse nature of workloads, and iv) the inherent data heterogeneity. We introduce Sparkle, an end-to-end deep learning based framework for automating the performance modeling and tuning of Spark applications. We introduce a modular DNN architecture that spans the entire Spark parameter configuration space and provides a universal performance modeling approach, completely eliminating the need for human or statistical reasoning. By employing a genetic optimization process, Sparkle quickly traverses the design space and identifies highly optimized Spark configurations. Experiments on the HiBench benchmark suite show that Sparkle delivers an average prediction accuracy of 93% with high generalization capability, i.e., approximately 80% accuracy for unseen workloads, dataset sizes, and configurations, outperforming the state of the art. For end-to-end optimization, Sparkle efficiently explores Spark's high-dimensional parameter space and delivers new dominant Spark configurations, corresponding to 65% Pareto coverage relative to Spark's native optimization counterpart.
Vol. 12, no. 4, pp. 1058-1073.
Citations: 0
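The genetic-optimization step can be sketched with a toy search over a Spark-like configuration space. The parameter names are real Spark settings, but the value grids and the synthetic cost model below are my own illustrative stand-ins; Sparkle evaluates candidates with its trained DNN performance model instead.

```python
import random

# Toy genetic search over a Spark-like configuration space. The surrogate
# cost function is a made-up stand-in for Sparkle's DNN predictor.

SPACE = {
    "spark.executor.cores":  [1, 2, 4, 8],
    "spark.executor.memory": [2, 4, 8, 16],      # GB
    "spark.sql.shuffle.partitions": [50, 100, 200, 400],
}

def surrogate_cost(cfg):
    """Synthetic model: runtime improves with resources, but
    over-partitioning adds shuffle overhead."""
    cores, mem, parts = (cfg[k] for k in SPACE)
    return 1000.0 / (cores * mem) + 0.05 * parts

def mutate(cfg, rng):
    child = dict(cfg)
    key = rng.choice(list(SPACE))                # resample one parameter
    child[key] = rng.choice(SPACE[key])
    return child

def genetic_search(generations=30, pop_size=8, rng=None):
    rng = rng or random.Random()
    pop = [{k: rng.choice(v) for k, v in SPACE.items()}
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=surrogate_cost)
        survivors = pop[: pop_size // 2]         # elitist selection
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=surrogate_cost)

best = genetic_search(rng=random.Random(1))
print(best)
```

Elitism guarantees the best configuration found so far is never lost, so the returned candidate is at least as good as the best of the initial random population.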