Latest Articles in IEEE Transactions on Cloud Computing

CloudBrain-ReconAI: A Cloud Computing Platform for MRI Reconstruction and Radiologists’ Image Quality Evaluation
IF 5.3 · CAS Tier 2 · Computer Science
IEEE Transactions on Cloud Computing Pub Date : 2024-10-08 DOI: 10.1109/TCC.2024.3476418
Yirong Zhou;Chen Qian;Jiayu Li;Zi Wang;Yu Hu;Biao Qu;Liuhong Zhu;Jianjun Zhou;Taishan Kang;Jianzhong Lin;Qing Hong;Jiyang Dong;Di Guo;Xiaobo Qu
Abstract: Efficient collaboration between engineers and radiologists is important for image reconstruction algorithm development and image quality evaluation in magnetic resonance imaging (MRI). Here, we develop CloudBrain-ReconAI, an online cloud computing platform for algorithm deployment and fast, blind reader studies. The platform supports online image reconstruction using state-of-the-art artificial intelligence and compressed sensing algorithms, with applications in fast imaging (Cartesian and non-Cartesian sampling) and high-resolution diffusion imaging. By visiting the website, radiologists can easily score and mark images; automatic statistical analysis is then provided. (Vol. 12, no. 4, pp. 1359–1371)
Citations: 0
D-STACK: High Throughput DNN Inference by Effective Multiplexing and Spatio-Temporal Scheduling of GPUs
IF 5.3 · CAS Tier 2 · Computer Science
IEEE Transactions on Cloud Computing Pub Date : 2024-10-07 DOI: 10.1109/TCC.2024.3476210
Aditya Dhakal;Sameer G. Kulkarni;K. K. Ramakrishnan
Abstract: Hardware accelerators such as GPUs are required for real-time, low-latency inference with Deep Neural Networks (DNNs). Providing inference services in the cloud can be resource intensive, and effectively utilizing accelerators in the cloud is important. Spatial multiplexing of the GPU, while limiting the GPU resources (GPU%) given to each DNN to the right amount, leads to higher GPU utilization and higher inference throughput. However, right-sizing the GPU for each DNN, batching requests optimally to balance throughput against service level objectives (SLOs), and scheduling DNNs appropriately to maximize throughput remain significant challenges. This article introduces a dynamic and fair spatio-temporal scheduler (D-STACK) that runs multiple DNNs concurrently on a GPU. We develop and validate a model that estimates the parallelism each DNN can utilize and a lightweight optimization formulation that finds an efficient batch size for each DNN. Our holistic inference framework provides high throughput while meeting application SLOs. We compare D-STACK with other GPU multiplexing and scheduling methods (e.g., NVIDIA Triton, Clipper, Nexus) using popular DNN models. Our controlled experiments with multiplexing several popular DNN models achieve up to 1.6× improvement in GPU utilization and up to 4× improvement in inference throughput. (Vol. 12, no. 4, pp. 1344–1358)
Citations: 0
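The batch-sizing trade-off described in the abstract can be illustrated with a toy model. The linear latency function, its constants, and the helper names below are my own assumptions for illustration, not D-STACK's actual formulation:

```python
# Toy model (not D-STACK's formulation): pick the largest batch size
# whose estimated batch latency still meets the SLO, since larger
# batches amortize fixed costs and raise throughput.

def batch_latency_ms(batch_size, base_ms=5.0, per_item_ms=1.2):
    """Assumed linear latency model: a fixed per-batch cost plus a
    per-request cost. Real profiles are measured, not assumed."""
    return base_ms + per_item_ms * batch_size

def best_batch_size(slo_ms, max_batch=64):
    """Largest batch size whose latency fits within the SLO (0 if none)."""
    best = 0
    for b in range(1, max_batch + 1):
        if batch_latency_ms(b) <= slo_ms:
            best = b
    return best

def throughput_rps(batch_size):
    """Requests per second served at the chosen batch size."""
    if batch_size == 0:
        return 0.0
    return batch_size / (batch_latency_ms(batch_size) / 1000.0)
```

Under these assumed constants, a 30 ms SLO admits a batch of 20 (29 ms per batch), and throughput grows with the batch size because the fixed cost is amortized.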
FaaSCtrl: A Comprehensive-Latency Controller for Serverless Platforms
IF 5.3 · CAS Tier 2 · Computer Science
IEEE Transactions on Cloud Computing Pub Date : 2024-10-02 DOI: 10.1109/TCC.2024.3473015
Abhisek Panda;Smruti R. Sarangi
Abstract: Serverless computing systems have become very popular because of their natural advantages in auto-scaling, load balancing, and fast distributed processing. As of today, almost all serverless systems define two QoS classes: best-effort (BE) and latency-sensitive (LS). Systems typically do not offer any latency or QoS guarantees for BE jobs and run them on a best-effort basis. In contrast, systems strive to minimize the processing time for LS jobs. This work proposes a precise definition for these job classes and argues that we need to consider a bouquet of performance metrics for serverless applications, not just a single one. We thus propose the comprehensive latency (CL), which comprises the mean, tail latency, median, and standard deviation of a series of invocations of a given serverless function. Next, we design a system, FaaSCtrl, whose main objective is to ensure that every component of the CL is within a prespecified limit for an LS application, while for BE applications these components are minimized on a best-effort basis. Given the sheer complexity of the scheduling problem in a large multi-application setup, we use the method of surrogate functions from optimization theory to design a simpler optimization problem that relies on performance and fairness. We rigorously establish the relevance of these metrics through characterization studies. Instead of using standard approaches based on optimization theory, we use a much faster reinforcement learning (RL) based approach to tune the knobs that govern process scheduling in Linux, namely the real-time priority and the assigned number of cores. RL works well in this scenario because the benefit of a given optimization is probabilistic in nature, owing to the inherent complexity of the system. Rigorous experiments on a set of real-world workloads show that FaaSCtrl achieves its objectives for both LS and BE applications and outperforms the state-of-the-art by 36.9% (for tail response latency) and 44.6% (for the standard deviation of response latency) for LS applications. (Vol. 12, no. 4, pp. 1328–1343)
Citations: 0
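The four components of the comprehensive latency (CL) named in the abstract are straightforward to compute from a series of invocation latencies. A minimal sketch, assuming the tail is the 99th percentile (the paper may define the tail and the limit check differently):

```python
import statistics

def comprehensive_latency(latencies_ms, tail_q=0.99):
    """Compute the four CL components named in the abstract: mean,
    median, standard deviation, and tail latency (here: 99th pct)."""
    xs = sorted(latencies_ms)
    idx = min(len(xs) - 1, int(tail_q * len(xs)))
    return {
        "mean": statistics.mean(xs),
        "median": statistics.median(xs),
        "std": statistics.pstdev(xs),
        "tail": xs[idx],
    }

def within_limits(cl, limits):
    """LS-style check: every specified CL component must stay within
    its prespecified limit."""
    return all(cl[k] <= limits[k] for k in limits)
```

For example, one tail-heavy invocation dominates the tail and standard deviation while barely moving the median, which is exactly why the abstract argues for a bouquet of metrics rather than a single one.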
QoS-Aware, Cost-Efficient Scheduling for Data-Intensive DAGs in Multi-Tier Computing Environment
IF 5.3 · CAS Tier 2 · Computer Science
IEEE Transactions on Cloud Computing Pub Date : 2024-09-26 DOI: 10.1109/TCC.2024.3468913
Paridhika Kayal;Alberto Leon-Garcia
Abstract: In today’s scientific landscape, Directed Acyclic Graphs (DAGs) are pivotal for representing task dependencies in data-intensive applications. Traditionally, two dominant bottom-up DAG scheduling approaches exist: one overlooks communication contention, and the other fails to exploit parallelization to improve latency. This study distinguishes itself by advocating a top-down approach that prioritizes latency or cost optimization in multi-tier environments to fulfill QoS and SLA requirements. Our strategy effectively mitigates bandwidth contention and facilitates parallel execution, leading to substantial completion-time reductions. Our findings suggest that myopic knowledge-based scheduling, emphasizing latency or cost minimization, can yield benefits comparable to its look-ahead counterparts. Through latency-efficient and cost-efficient topological sorting, our wDAGSplit strategy introduces a two-stage partitioning and scheduling approach. Its simplicity and adaptability extend its usability to DAGs of any scale. Evaluated on over 100,000 real-world DAG applications, wDAGSplit demonstrates latency improvements of up to 80× over Edge-only scenarios, 15× over Near-Edge-only, and 6× over Cloud-only. In terms of cost, our approach achieves improvements of up to 60× over Edge-only scenarios, 250× over Near-Edge-only, and 70× over Cloud-only. Moreover, for DAGs with 50 tasks, we achieve 5× lower latency than previous approaches, along with a complexity reduction of up to 24×. (Vol. 12, no. 4, pp. 1314–1327)
Citations: 0
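The topological sorting the abstract relies on can be sketched with Kahn's algorithm driven by a cost-keyed priority queue, so that among ready tasks the cheapest is scheduled first. This tie-breaking rule is illustrative only; wDAGSplit's actual latency- and cost-efficient sorting criteria are specific to the paper:

```python
import heapq

def weighted_topo_sort(tasks, deps, cost):
    """Kahn's algorithm with a min-heap keyed by task cost: cheaper
    ready tasks are emitted first while all dependencies are honored."""
    indeg = {t: 0 for t in tasks}
    children = {t: [] for t in tasks}
    for u, v in deps:            # edge u -> v means v depends on u
        indeg[v] += 1
        children[u].append(v)
    ready = [(cost[t], t) for t in tasks if indeg[t] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, t = heapq.heappop(ready)
        order.append(t)
        for v in children[t]:
            indeg[v] -= 1
            if indeg[v] == 0:
                heapq.heappush(ready, (cost[v], v))
    return order
```

Any cost function (estimated latency, monetary cost) can be plugged into the key without changing the dependency-respecting structure of the sort.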
Anomaly Transformer Ensemble Model for Cloud Data Anomaly Detection
IF 5.3 · CAS Tier 2 · Computer Science
IEEE Transactions on Cloud Computing Pub Date : 2024-09-23 DOI: 10.1109/TCC.2024.3466174
Won Sakong;Jongyeop Kwon;Kyungha Min;Suyeon Wang;Wooju Kim
Abstract: The stability of and user trust in cloud services depend on prompt detection of and response to diverse anomalies. This study focuses on an ensemble-based anomaly detection methodology that integrates log data with computing-resource metrics, aiming to overcome the limitations of traditional single-data models. To handle the unstructured nature of log data, we use the Drain parser to transform it into a structured format and Doc2Vec to embed it. The study adheres to a reconstruction-based approach to anomaly detection, building upon the attention-based Anomaly Transformer model, and integrates preprocessed metric data with log data for effective detection. Experiments were conducted using metric and log data collected from real-world cloud environments, with performance evaluated on accuracy, recall, precision, F1 score, and AUROC. The results demonstrate that our proposed ensemble-based model outperforms traditional models such as LSTM, VAR, and DeepLog. (Vol. 12, no. 4, pp. 1305–1313)
Citations: 0
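Reconstruction-based detection, which the abstract builds on, flags points whose reconstruction error exceeds a threshold. A minimal sketch of that decision rule, using a common mean-plus-k-sigma threshold rather than the paper's ensemble scoring:

```python
import statistics

def reconstruction_anomalies(values, reconstructed, k=3.0):
    """Flag indices whose absolute reconstruction error exceeds
    mean + k * std of the errors. This is a generic rule of thumb;
    the Anomaly Transformer's association-based score is richer."""
    errors = [abs(v, ) if False else abs(v - r) for v, r in zip(values, reconstructed)]
    mu = statistics.mean(errors)
    sigma = statistics.pstdev(errors)
    threshold = mu + k * sigma
    return [i for i, e in enumerate(errors) if e > threshold]
```

A well-trained model reconstructs normal points closely, so only points the model "cannot explain" clear the threshold.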
WorkloadDiff: Conditional Denoising Diffusion Probabilistic Models for Cloud Workload Prediction
IF 5.3 · CAS Tier 2 · Computer Science
IEEE Transactions on Cloud Computing Pub Date : 2024-09-16 DOI: 10.1109/TCC.2024.3461649
Weiping Zheng;Zongxiao Chen;Kaiyuan Zheng;Weijian Zheng;Yiqi Chen;Xiaomao Fan
Abstract: Accurate workload forecasting plays a crucial role in optimizing resource allocation, enhancing performance, and reducing energy consumption in cloud data centers. Deep learning-based methods have emerged as the dominant approach in this field, exhibiting exceptional performance. However, most existing methods lack the ability to quantify confidence, limiting their practical decision-making utility. To address this limitation, we propose a novel denoising diffusion probabilistic model (DDPM)-based method, termed WorkloadDiff, for multivariate probabilistic workload prediction. WorkloadDiff leverages both original and noisy signals from input conditions using a two-path neural network. Additionally, we introduce a multi-scale feature extraction method and an adaptive fusion approach to capture diverse temporal patterns within the workload. To enhance consistency between conditions and predicted values, we incorporate a resampling strategy into the inference of WorkloadDiff. Extensive experiments conducted on four public datasets demonstrate the superior performance of WorkloadDiff over all baseline models, establishing it as a robust tool for resource management in cloud data centers. (Vol. 12, no. 4, pp. 1291–1304)
Citations: 0
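DDPM-based methods such as WorkloadDiff rest on a forward process that progressively noises the data, which a network then learns to reverse. A minimal sketch of that standard machinery (linear beta schedule and the closed-form q(x_t | x_0) sample); the paper's two-path network and resampling strategy are not modeled here:

```python
import math
import random

def linear_beta_schedule(T, beta_start=1e-4, beta_end=0.02):
    """Standard DDPM linear noise schedule over T diffusion steps."""
    step = (beta_end - beta_start) / (T - 1)
    return [beta_start + i * step for i in range(T)]

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I),
    where abar_t is the cumulative product of (1 - beta_i)."""
    abar = 1.0
    for beta in betas[: t + 1]:
        abar *= 1.0 - beta
    return [math.sqrt(abar) * x + math.sqrt(1.0 - abar) * rng.gauss(0.0, 1.0)
            for x in x0]
```

With zero betas the "noised" series equals the original, and as t grows the signal term shrinks toward pure Gaussian noise, which is what makes the learned reverse process a generative sampler.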
A Lightweight Privacy-Preserving Ciphertext Retrieval Scheme Based on Edge Computing
IF 5.3 · CAS Tier 2 · Computer Science
IEEE Transactions on Cloud Computing Pub Date : 2024-09-16 DOI: 10.1109/TCC.2024.3461732
Na Wang;Wen Zhou;Qingyun Han;Jianwei Liu;Weilue Liao;Junsong Fu
Abstract: With the rapid development of cloud computing and Internet of Things (IoT) technologies, large amounts of data collected from IoT devices are encrypted and outsourced to cloud servers for storage and sharing. However, traditional ciphertext retrieval schemes impose high computation and storage overhead on end users. Meanwhile, resource-constrained IoT devices struggle with heavy data computation and transmission, which leads to transmission delays and a poor user experience. In this article, we propose a lightweight privacy-preserving ciphertext retrieval scheme based on edge computing (LPCR) by extending searchable encryption (SE) and ciphertext-policy attribute-based encryption (CP-ABE) techniques. First, to avoid network delay and paralysis, we introduce edge servers into LPCR and design a collaboration mechanism between the user side and the edge servers; the user side only needs to perform lightweight computation and storage tasks, which greatly reduces its resource consumption. Second, we extend the basic ciphertext-policy attribute-based keyword search (CP-ABKS) technique and design a Linear Secret Sharing Scheme (LSSS) access control algorithm with attribute values to hide access policies and attributes. In addition, to improve retrieval accuracy, the document indexes and query trapdoors are built from conjunctive keywords to help the cloud server locate exactly the data that the user wishes to query. Formal security analysis verifies that LPCR achieves security against chosen plaintext attacks (CPA) and chosen keyword attacks (CKA), and resists collusion attacks. Simulation experiments show that LPCR is lightweight and feasible. (Vol. 12, no. 4, pp. 1273–1290)
Citations: 0
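The conjunctive-keyword matching performed by the indexes and trapdoors can be shown with a toy plaintext sketch. The salted-hash tag construction below is illustrative only; LPCR derives search tokens cryptographically under CP-ABKS with hidden policies, not from a public salt:

```python
import hashlib

def _tag(keyword, salt=b"demo-salt"):
    """Hashed keyword tag (illustrative stand-in for a real trapdoor)."""
    return hashlib.sha256(salt + keyword.encode()).hexdigest()

def build_index(docs):
    """Map each document id to the set of tags of its keywords."""
    return {doc_id: {_tag(w) for w in words} for doc_id, words in docs.items()}

def conjunctive_search(index, query_keywords):
    """Return ids of documents containing ALL query keywords, mirroring
    how a conjunctive trapdoor narrows results server-side."""
    trapdoor = {_tag(w) for w in query_keywords}
    return sorted(d for d, tags in index.items() if trapdoor <= tags)
```

Requiring the whole trapdoor set to be a subset of a document's tags is what lifts precision over single-keyword search: only documents matching every keyword are returned.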
Generative Adversarial Privacy for Multimedia Analytics Across the IoT-Edge Continuum
IF 5.3 · CAS Tier 2 · Computer Science
IEEE Transactions on Cloud Computing Pub Date : 2024-09-12 DOI: 10.1109/TCC.2024.3459789
Xin Wang;Jianhui Lv;Byung-Gyu Kim;Carsten Maple;B. D. Parameshachari;Adam Slowik;Keqin Li
Abstract: The proliferation of multimedia-enabled IoT devices and edge computing enables a new class of data-intensive applications. However, analyzing the massive volumes of multimedia data presents significant privacy challenges. We propose a novel framework called generative adversarial privacy (GAP) that leverages generative adversarial networks (GANs) to synthesize privacy-preserving surrogate data for multimedia analytics across the IoT-Edge continuum. GAP carefully perturbs the GAN’s training process to provide rigorous differential privacy guarantees without compromising utility. Moreover, we present optimization strategies, including dynamic privacy budget allocation, adaptive gradient clipping, and weight clustering, to improve convergence and data quality under a constrained privacy budget. Theoretical analysis proves that GAP provides rigorous privacy protections while enabling high-fidelity analytics. Extensive experiments on real-world multimedia datasets demonstrate that GAP outperforms existing methods, producing high-quality synthetic data for privacy-preserving multimedia processing in diverse IoT-Edge applications. (Vol. 12, no. 4, pp. 1260–1272)
Citations: 0
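Gradient clipping with calibrated Gaussian noise is the standard differential-privacy machinery behind perturbing a GAN's training process. A minimal DP-SGD-style sketch with a fixed clipping norm (GAP's adaptive clipping and budget allocation are not modeled here):

```python
import math
import random

def clip_gradient(grad, max_norm):
    """Scale a per-example gradient down to L2 norm max_norm if it
    exceeds it; otherwise leave it unchanged."""
    norm = math.sqrt(sum(g * g for g in grad))
    if norm <= max_norm:
        return list(grad)
    scale = max_norm / norm
    return [g * scale for g in grad]

def noisy_aggregate(grads, max_norm, noise_mult, rng):
    """Sum clipped per-example gradients and add Gaussian noise with
    std = noise_mult * max_norm, the usual Gaussian-mechanism
    calibration: the clip bounds each example's sensitivity."""
    dim = len(grads[0])
    total = [0.0] * dim
    for g in grads:
        for i, v in enumerate(clip_gradient(g, max_norm)):
            total[i] += v
    sigma = noise_mult * max_norm
    return [t + rng.gauss(0.0, sigma) for t in total]
```

Clipping bounds how much any single example can move the update, and the noise scale is set relative to that bound, which is what makes the privacy accounting possible.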
Corrections to “DNN Surgery: Accelerating DNN Inference on the Edge through Layer Partitioning”
IF 5.3 · CAS Tier 2 · Computer Science
IEEE Transactions on Cloud Computing Pub Date : 2024-09-05 DOI: 10.1109/TCC.2024.3404548
Huanghuang Liang;Qianlong Sang;Chuang Hu;Dazhao Cheng;Xiaobo Zhou;Dan Wang;Wei Bao;Yu Wang
Abstract: This correction adds a reference to the previous conference version of the paper and completes the grant number mentioned in the acknowledgments of that version. (Vol. 12, no. 3, p. 966)
Citations: 0
FedPAW: Federated Learning With Personalized Aggregation Weights for Urban Vehicle Speed Prediction
IF 5.3 · CAS Tier 2 · Computer Science
IEEE Transactions on Cloud Computing Pub Date : 2024-09-02 DOI: 10.1109/TCC.2024.3452696
Yuepeng He;Pengzhan Zhou;Yijun Zhai;Fang Qu;Zhida Qin;Mingyan Li;Songtao Guo
Abstract: Vehicle speed prediction is crucial for intelligent transportation systems, promoting more reliable autonomous driving by accurately predicting future vehicle conditions. Due to variations in drivers’ driving styles and vehicle types, speed predictions for different target vehicles may differ significantly. Existing methods may not realize personalized vehicle speed prediction while protecting drivers’ data privacy. We propose a Federated learning framework with Personalized Aggregation Weights (FedPAW) to overcome these challenges. This method captures client-specific information by measuring the weighted mean squared error between the parameters of local models and global models. The server sends tailored aggregated models to clients instead of a single global model, without incurring additional computational or communication overhead for clients. To evaluate the effectiveness of FedPAW, we collected driving data in urban scenarios using the autonomous driving simulator CARLA, employing an LSTM-based Seq2Seq model with a multi-head attention mechanism to predict the future speed of target vehicles. The results demonstrate that FedPAW ranks lowest in prediction error within a 10-second time horizon, with a 0.8% reduction in test MAE compared to eleven representative benchmark baselines. (Vol. 12, no. 4, pp. 1248–1259)
Citations: 0
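The idea of deriving per-client aggregation weights from parameter-space distance can be sketched as follows. Here each peer model is weighted inversely to its MSE from the client's parameters and the weights are normalized; this is an illustrative rule inspired by, but not identical to, FedPAW's weighted-MSE formulation against the global model:

```python
def mse(a, b):
    """Mean squared error between two flat parameter vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def personalized_weights(client_params, peer_params_list):
    """Weight each peer model inversely to its parameter distance from
    the client, then normalize so the weights sum to 1 (illustrative)."""
    inv = [1.0 / (1e-8 + mse(client_params, p)) for p in peer_params_list]
    total = sum(inv)
    return [w / total for w in inv]

def aggregate(peer_params_list, weights):
    """Server-side weighted average: the tailored model sent back to
    this particular client."""
    dim = len(peer_params_list[0])
    return [sum(w * p[i] for w, p in zip(weights, peer_params_list))
            for i in range(dim)]
```

A peer whose parameters sit close to the client's dominates that client's tailored aggregate, so each client receives a model biased toward statistically similar peers rather than a single shared global model.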