Scalable Computing-Practice and Experience: Latest Articles

ICFD: An Incremental Learning Method Based on Data Feature Distribution
IF 1.1
Scalable Computing-Practice and Experience Pub Date : 2022-12-01 DOI: 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00103
Yunzhe Zhu, Yusong Tan, Xiaoling Li, Qingbo Wu, Xueqin Ning
{"title":"ICFD: An Incremental Learning Method Based on Data Feature Distribution","authors":"Yunzhe Zhu, Yusong Tan, Xiaoling Li, Qingbo Wu, Xueqin Ning","doi":"10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00103","DOIUrl":"https://doi.org/10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00103","url":null,"abstract":"Neural network models have achieved great success in numerous disciplines in recent years, including image segmentation, object identification, and natural language processing (NLP). Incremental learning in these fields focuses on training models in a continuous data stream. As time goes by, more new data becomes available, and old data may become unavailable owing to resource constraints such as storage. As a result, when new data is continually arriving, the performance of the neural network model on the old data sample sometimes decreases significantly, a phenomenon known as catastrophic forgetting. Many corresponding strategies have been proposed to mitigate the catastrophic forgetting of neural network models, which are based on parameter regularization, data replay, and parameter isolation. This paper proposes an incremental learning method based on data feature distribution (ICFD). The method uses Gaussian distribution to generate features from old data to train neural network models based on the phenomenon that feature vectors obey multi-dimensional Gaussian distribution in feature space. This method avoids storing a large number of original samples, and the generated old class features contain more sample information. This method combines data playback and parameter regularization in concrete implementation. The experimental results of ICFD on the CIFAR-100 demonstrate that when the incremental step is 5, the average incremental accuracy is increased by 10.4%. When the incremental step is 10, the average incremental accuracy is improved by 8.1%.","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85817338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
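The abstract's core idea for ICFD, fitting a Gaussian to each old class's feature vectors and sampling pseudo-features instead of storing raw samples, can be sketched in a few lines. This is a minimal illustration of that idea, not the authors' implementation; the feature dimension and sample counts are placeholders.

```python
import numpy as np

def fit_class_gaussian(features):
    """Fit a multivariate Gaussian to the feature vectors of one old class."""
    mean = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    return mean, cov

def sample_old_features(mean, cov, n_samples):
    """Draw pseudo-features for an old class instead of replaying stored raw samples."""
    return np.random.multivariate_normal(mean, cov, size=n_samples)

# Placeholder: features extracted by the current backbone for one old class.
old_feats = np.random.randn(200, 64)
mu, sigma = fit_class_gaussian(old_feats)
replay_feats = sample_old_features(mu, sigma, 100)  # fed to the classifier head alongside new-class data
```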
MlpE: Knowledge Graph Embedding with Multilayer Perceptron Networks
IF 1.1
Scalable Computing-Practice and Experience Pub Date : 2022-12-01 DOI: 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00130
Qing Xu, Kaijun Ren, Xiaoli Ren, Shuibing Long, Xiaoyong Li
{"title":"MlpE: Knowledge Graph Embedding with Multilayer Perceptron Networks","authors":"Qing Xu, Kaijun Ren, Xiaoli Ren, Shuibing Long, Xiaoyong Li","doi":"10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00130","DOIUrl":"https://doi.org/10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00130","url":null,"abstract":"Knowledge graph embedding (KGE) is an efficient method to predict missing links in knowledge graphs. Most KGE models based on convolutional neural networks have been designed for improving the ability of capturing interaction. Although these models work well, they suffered from the limited receptive field of the convolution kernel, which lead to the lack of ability to capture long-distance interactions. In this paper, we firstly illustrate the interactions between entities and relations and discuss its effect in KGE models by experiments, and then propose MlpE, which is a fully connected network with only three layers. MlpE aims to capture long-distance interactions to improve the performance of link prediction. Extensive experimental evaluations on four typical datasets WN18RR, FB15k-237, DB100k and YAGO3-10 have shown the superority of MlpE, especially in some cases MlpE can achieve the better performance with less parameters than the state-of-the-art convolution-based KGE model.","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84089415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
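Since the abstract describes MlpE as a fully connected network with only three layers that scores entity-relation pairs, a rough PyTorch sketch conveys the shape of such a model. The layer sizes, activation choices, and 1-N scoring against all candidate tails are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MlpESketch(nn.Module):
    """Three-layer MLP scorer for link prediction (illustrative dimensions)."""
    def __init__(self, n_entities, n_relations, dim=200, hidden=512):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, head_idx, rel_idx):
        # Fully connected layers let every embedding dimension interact with every
        # other one, i.e. no receptive-field limit on the interactions captured.
        x = torch.cat([self.ent(head_idx), self.rel(rel_idx)], dim=-1)
        x = self.mlp(x)
        return x @ self.ent.weight.t()  # scores over all candidate tail entities

model = MlpESketch(n_entities=1000, n_relations=50)
scores = model(torch.tensor([0, 1]), torch.tensor([3, 7]))  # shape (2, 1000)
```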
Attack-Model-Agnostic Defense Against Model Poisonings in Distributed Learning
IF 1.1
Scalable Computing-Practice and Experience Pub Date : 2022-12-01 DOI: 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00354
Hairuo Xu, Tao Shu
{"title":"Attack-Model-Agnostic Defense Against Model Poisonings in Distributed Learning","authors":"Hairuo Xu, Tao Shu","doi":"10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00354","DOIUrl":"https://doi.org/10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00354","url":null,"abstract":"The distributed nature of distributed learning renders the learning process susceptible to model poisoning attacks. Most existing countermeasures are designed based on a presumed attack model, and can only perform under the presumed attack model. However, in reality a distributed learning system typically does not have the luxury of knowing the attack model it is going to be actually facing in its operation when the learning system is deployed, thus constituting a zero-day vulnerability of the system that has been largely overlooked so far. In this paper, we study the attack-model-agnostic defense mechanisms for distributed learning, which are capable of countering a wide-spectrum of model poisoning attacks without relying on assumptions of the specific attack model, and hence alleviating the zero-day vulnerability of the system. Extensive experiments are performed to verify the effectiveness of the proposed defense.","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82139024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
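The abstract does not detail the defense mechanism itself, so no attempt is made to reproduce it here. For context only, the snippet below shows coordinate-wise median aggregation, a classic aggregator that makes no assumption about the attack model; it is a generic illustration of the attack-model-agnostic setting, not the paper's method.

```python
import torch

def median_aggregate(client_updates):
    """Coordinate-wise median of flattened client updates (generic robust aggregator)."""
    stacked = torch.stack(client_updates, dim=0)  # (n_clients, n_params)
    return stacked.median(dim=0).values

# Hypothetical flattened updates from seven clients, some possibly poisoned.
updates = [torch.randn(10) for _ in range(7)]
global_update = median_aggregate(updates)
```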
An Intelligent Scoring Method for Sketch Portrait Based on Attention Convolution Neural Network
IF 1.1
Scalable Computing-Practice and Experience Pub Date : 2022-12-01 DOI: 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00156
Shaolong Zheng, Zewei Xu, Zhenni Li, Yihui Cai, Mingyu Han, Yi Ji
{"title":"An Intelligent Scoring Method for Sketch Portrait Based on Attention Convolution Neural Network","authors":"Shaolong Zheng, Zewei Xu, Zhenni Li, Yihui Cai, Mingyu Han, Yi Ji","doi":"10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00156","DOIUrl":"https://doi.org/10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00156","url":null,"abstract":"It is very important for art students to get timely feedback on their paintings. Currently, this work is done by professional teachers. However, it is problematic for the scoring method since the subjectivity of manual scoring and the scarcity of teacher resources. It is time-consuming and expensive to carry out this work in practice. In this paper, we propose a depthwise separable convolutional network with multi-head self-attention module (DCMnet) for developing an intelligent scoring mechanism for sketch portraits. Specifically, to build a lightweight network, we first utilize the depthwise separable convolutional block as the backbone of the model for mining the local features of sketch portraits. Then the attention module is employed to notice global dependencies within internal representations of portraits. Finally, we use DCMnet to build a scoring framework, which first divides the works into four score levels, and then subdivides them into eight grades: below 60, 60-64, 65-69, 70-74, 75-79, 80-84, 85-89, and above 90. Each grade of work is given a basic score, and the final score of works is composed of the basic score and the mood factor. In the training process, a pretraining strategy is introduced for fast convergence. For verifying our method, we collect a sketch portrait dataset in the Guangdong Fine Arts Joint Examination to train the DCMnet. The experimental results demonstrate that the proposed method achieves excellent accuracy at each grade and the efficiency of scoring is improved.","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82515561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
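A minimal PyTorch sketch of the two ingredients named in the DCMnet abstract, a depthwise separable convolution block for local features and a multi-head self-attention module for global dependencies, is given below. Channel counts, head counts, and the way the two parts are chained are assumptions; the paper's full DCMnet is not reproduced here.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Per-channel 3x3 depthwise conv followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class ConvAttentionBlock(nn.Module):
    """Local features via depthwise separable conv, global context via self-attention."""
    def __init__(self, channels, heads=4):
        super().__init__()
        self.conv = DepthwiseSeparableConv(channels, channels)
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=heads, batch_first=True)

    def forward(self, x):                    # x: (B, C, H, W)
        x = self.conv(x)
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)   # (B, H*W, C) token sequence
        out, _ = self.attn(seq, seq, seq)
        return out.transpose(1, 2).reshape(b, c, h, w)

block = ConvAttentionBlock(channels=32)
y = block(torch.randn(2, 32, 28, 28))        # output has the same shape as the input
```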
Robust Spatio-Temporal Trajectory Modeling Based on Auto-Gated Recurrent Unit
IF 1.1
Scalable Computing-Practice and Experience Pub Date : 2022-12-01 DOI: 10.1109/smartworld-uic-atc-scalcom-digitaltwin-pricomp-metaverse56740.2022.00176
Jia Jia, Xiaoyong Li, Ximing Li, Linghui Li, Jie Yuan, Hongmiao Wang, Yali Gao, Pengfei Qiu, Jialu Tang
{"title":"Robust Spatio-Temporal Trajectory Modeling Based on Auto-Gated Recurrent Unit","authors":"Jia Jia, Xiaoyong Li, Ximing Li, Linghui Li, Jie Yuan, Hongmiao Wang, Yali Gao, Pengfei Qiu, Jialu Tang","doi":"10.1109/smartworld-uic-atc-scalcom-digitaltwin-pricomp-metaverse56740.2022.00176","DOIUrl":"https://doi.org/10.1109/smartworld-uic-atc-scalcom-digitaltwin-pricomp-metaverse56740.2022.00176","url":null,"abstract":"With the huge amount of crowd mobility data generated by the explosion of mobile devices, deep neural networks (DNNs) are applied to trajectory data mining and modeling, which make great progresses in those scenarios. However, recent studies have demonstrated that DNNs are highly vulnerable to adversarial examples which are crafted by adding subtle, imperceptible noise to normal examples, and leading to the wrong prediction with high confidence. To improve the robustness of modeling spatiotemporal trajectories via DNNs, we propose a collaborative learning model named “Auto-GRU”, which consists of an autoencoder-based self-representation network (SRN) for robust trajectory feature learning and gated recurrent unit (GRU)-based classification network which shares information with SRN for collaborative learning and strictly defending adversarial examples. Our proposed method performs well in defending both white and black box attacks, especially in black-box attacks, where the performance outperforms state-of-the-art methods. Moreover, extensive experiments on Geolife and Beijing taxi traces datasets demonstrate that the proposed model can improve the robustness against adversarial examples without a significant performance penalty on clean examples.","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82587723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
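The Auto-GRU abstract describes two cooperating branches: an autoencoder-based self-representation network for robust trajectory features and a GRU-based classifier that shares information with it. The sketch below wires up such a two-branch model at its simplest; the layer sizes, input features (e.g. longitude/latitude per time step), and loss weighting are all assumptions, and the paper's exact information-sharing scheme is not reproduced.

```python
import torch
import torch.nn as nn

class AutoGRUSketch(nn.Module):
    """Autoencoder branch (reconstruction) plus GRU branch (classification) over one shared encoding."""
    def __init__(self, feat_dim=2, latent_dim=32, n_classes=5):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, feat_dim)        # self-representation / reconstruction branch
        self.gru = nn.GRU(latent_dim, 64, batch_first=True)   # classification branch
        self.head = nn.Linear(64, n_classes)

    def forward(self, traj):                 # traj: (B, T, feat_dim)
        z = torch.relu(self.encoder(traj))
        recon = self.decoder(z)              # trained with a reconstruction loss
        _, h = self.gru(z)
        logits = self.head(h[-1])            # trained with a classification loss
        return logits, recon

model = AutoGRUSketch()
logits, recon = model(torch.randn(4, 20, 2))  # batch of 4 trajectories, 20 steps each
```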
TGNRec: Recommendation Based on Trust Networks and Graph Neural Networks
IF 1.1
Scalable Computing-Practice and Experience Pub Date : 2022-12-01 DOI: 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00274
Ting Li, Chundong Wang, Huai-bin Wang
{"title":"TGNRec: Recommendation Based on Trust Networks and Graph Neural Networks","authors":"Ting Li, Chundong Wang, Huai-bin Wang","doi":"10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00274","DOIUrl":"https://doi.org/10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00274","url":null,"abstract":"In recent years, user-user trust relationships have played an important role in recommendation based on graph neural networks(GNNs). However, existing studies based on GNNs still face the following challenges: how to obtain more rating information of users’ trust from trust networks when using GNNs to learn the user latent feature. And how to effectively mine items’ relationships from the recommended data so that GNNs can better learn the item latent feature. To address the above challenges, in this paper, we propose a new model called TGNRec that accomplishes recommendation based on trust networks and graph neural networks. TGNRec consists of three modules: User Spatial Module, Item Spatial Module, Prediction Module. User Spatial Module considers both the rating information of users’ direct and indirect trust based on the transfer properties of trust relationships in trust networks. It mainly learns the user latent feature using user-item interactions and user-user trust relationships. Item Spatial Module establishes items’ similarity relationships based on the rating mean, which helps GNNs learn the item latent feature from user-item interactions and item-item relationships. Prediction Module realizes users’ rating prediction for unrated items by aggregating User Spatial Module and Item Spatial Module. At last, we conduct experiments on two real-world datasets, Film Trust and Ciao-DVD. The experimental results demonstrate the effectiveness of TGNRec for rating prediction in recommendation.","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78854807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
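One concrete detail in the TGNRec abstract is that the Item Spatial Module builds item-item relationships from the rating mean. A small illustration of that step, linking items whose mean ratings are close, is sketched below; the closeness threshold is an assumption, and the GNN layers that would consume these edges are omitted.

```python
import numpy as np

def item_similarity_edges(ratings, threshold=0.25):
    """Link items whose mean ratings differ by at most `threshold` (illustrative reading of the abstract)."""
    means = {item: np.mean(vals) for item, vals in ratings.items()}
    items = list(means)
    edges = []
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            if abs(means[a] - means[b]) <= threshold:
                edges.append((a, b))
    return edges

ratings = {"i1": [4, 5, 4], "i2": [4, 4, 5], "i3": [2, 1, 2]}
print(item_similarity_edges(ratings))  # [('i1', 'i2')]
```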
Aspect-Level Sentiment Classification Based on Self-Attention Routing via Capsule Network
IF 1.1
Scalable Computing-Practice and Experience Pub Date : 2022-12-01 DOI: 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00280
Chang Liu, Jianxia Chen, Tianci Wang, Qi Liu, Xinyun Wu, Lei Mao
{"title":"Aspect-Level Sentiment Classification Based on Self-Attention Routing via Capsule Network","authors":"Chang Liu, Jianxia Chen, Tianci Wang, Qi Liu, Xinyun Wu, Lei Mao","doi":"10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00280","DOIUrl":"https://doi.org/10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00280","url":null,"abstract":"Aspect-level sentiment classification task aims at determining the sentiment polarity towards each aspect in a sentence. Although existing models have achieved remarkable performance, they always ignore the semantic relationship between aspects and their context, resulting in the lack of syntax information and aspect features. Therefore, the paper proposes a novel model named ASC based on the Self-Attention routing combined with the Position-biased weight approach, ASC-SAP in short. First, the paper utilizes the position-biased weight approach to construct an aspect-enhanced embedding. Furthermore, the paper develops a novel non-iterative but highly parallelized self-attention routing mechanism to efficiently transfer the aspect features to the target capsules. In addition, the paper utilizes pre-trained model bidirectional encoder representation from transformers (BERT). Comprehensive experiments show that our model achieves excellent performance on Twitter and SemEval2014 benchmarks and verify the effectiveness of our models.","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87015073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
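The abstract's first step, a position-biased weight that builds an aspect-enhanced embedding, can be illustrated with a simple distance-based weighting of context tokens. The linear decay formula below is an assumption made for illustration; the paper's exact weighting scheme is not given in the abstract.

```python
import torch

def position_biased_weights(seq_len, aspect_start, aspect_end):
    """Down-weight context tokens by their distance to the aspect span (linear decay, illustrative)."""
    idx = torch.arange(seq_len, dtype=torch.float)
    dist = torch.where(idx < aspect_start, aspect_start - idx,
                       torch.where(idx > aspect_end, idx - aspect_end, torch.zeros_like(idx)))
    return 1.0 - dist / seq_len  # tokens inside the aspect span keep weight 1.0

weights = position_biased_weights(seq_len=8, aspect_start=2, aspect_end=3)
# An aspect-enhanced embedding could then scale token embeddings: emb * weights.unsqueeze(-1)
```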
APR-ES: Adaptive Penalty-Reward Based Evolution Strategy for Deep Reinforcement Learning
IF 1.1
Scalable Computing-Practice and Experience Pub Date : 2022-12-01 DOI: 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00079
Dongdong Wang, Siyang Lu, Xiang Wei, Mingquan Wang, Yandong Li, Liqiang Wang
{"title":"APR-ES: Adaptive Penalty-Reward Based Evolution Strategy for Deep Reinforcement Learning","authors":"Dongdong Wang, Siyang Lu, Xiang Wei, Mingquan Wang, Yandong Li, Liqiang Wang","doi":"10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00079","DOIUrl":"https://doi.org/10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00079","url":null,"abstract":"As a black-box optimization approach, derivative-free evolution strategy (ES) draws lots of attention in virtue of its low sensitivity and high scalability. It rivals Markov Decision Process based reinforcement learning or even can more efficiently improve rewards under complex scenarios. However, existing derivative-free ES still confronts slow convergence speed at the early training stage and limited exploration at the late convergence stage. Inspired from human learning process, we propose a new scheme extended from ES by taking advantage of prior knowledge to guide ES, thus accelerating early exploitation process and improving later exploration ability. At early training stage, Drift-Plus-Penalty (DPP), a penalty-based optimization scheme, is reformulated to boost penalty learning and reduce regrets. Along with DPP-directed evolution, reward learning with Thompson sampling (TS) is increasingly enhanced to explore global optima at late training stage. This scheme is justified with extensive experiments from a variety of benchmarks, including numerical problems, physics environments, and games. By virtue of its imitation of human learning process, this scheme outperforms state-of-the-art ES on the benchmarks by a large margin.","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88379860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
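APR-ES builds on a plain derivative-free evolution strategy; the snippet below shows that baseline update (sample perturbations, score them, move along the reward-weighted direction) as a reference point. The penalty shaping via Drift-Plus-Penalty and the Thompson-sampling-driven reward learning that define APR-ES are not reproduced here, and the hyperparameters are placeholders.

```python
import numpy as np

def es_step(theta, fitness, pop_size=50, sigma=0.1, lr=0.02):
    """One update of a vanilla evolution strategy (the derivative-free baseline APR-ES extends)."""
    noise = np.random.randn(pop_size, theta.size)
    rewards = np.array([fitness(theta + sigma * n) for n in noise])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # normalize fitness scores
    grad = noise.T @ rewards / (pop_size * sigma)                  # search-gradient estimate
    return theta + lr * grad

theta = np.zeros(3)
for _ in range(200):
    theta = es_step(theta, lambda w: -np.sum((w - 1.0) ** 2))  # toy objective, optimum at w = 1
```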
Self-distilled Named Entity Recognition Based on Boundary Detection and Biaffine Attention
IF 1.1
Scalable Computing-Practice and Experience Pub Date : 2022-12-01 DOI: 10.1109/smartworld-uic-atc-scalcom-digitaltwin-pricomp-metaverse56740.2022.00162
Yong Song, Zhiwei Yan, Yukun Qin, Xiaozhou Ye, Ye Ouyang
{"title":"Self-distilled Named Entity Recognition Based on Boundary Detection and Biaffine Attention","authors":"Yong Song, Zhiwei Yan, Yukun Qin, Xiaozhou Ye, Ye Ouyang","doi":"10.1109/smartworld-uic-atc-scalcom-digitaltwin-pricomp-metaverse56740.2022.00162","DOIUrl":"https://doi.org/10.1109/smartworld-uic-atc-scalcom-digitaltwin-pricomp-metaverse56740.2022.00162","url":null,"abstract":"Named Entity Recognition (NER) is an important down-streaming task in natural language processing. Span-based methods are applicable to both flat and nested entities. However, they lack explicit boundary supervision. To address this issue, we propose a multi-task and self-distilled model which combines biaffine span classification and entity boundary detection tasks. Firstly, the boundary detection and biaffine span classification models are jointly trained under a multi-task learning framework to address the problem of lacking supervision of boundaries. Then, self-distillation technique is applied on the model to reassign entity probabilities from annotated spans to surrounding spans and more entity types, further improving the accuracy of NER by soft labels that contain richer knowledge. Experiments were based on a high-density entity text dataset of the commodity titles from an e-commerce company. Finally, the experimental results show that our model exhibited a better F1 score than the existing common models.","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90184901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
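The biaffine span classification half of the model can be sketched directly: every (start, end) token pair receives a score per entity label through a biaffine form. The hidden sizes and the bias-augmented formulation below are conventional choices assumed for illustration, not the paper's exact configuration; the boundary-detection task and the self-distillation step are omitted.

```python
import torch
import torch.nn as nn

class BiaffineSpanScorer(nn.Module):
    """Score every (start, end) token pair for each entity label with a biaffine form."""
    def __init__(self, hidden=128, n_labels=5):
        super().__init__()
        self.start_mlp = nn.Linear(hidden, hidden)
        self.end_mlp = nn.Linear(hidden, hidden)
        # The +1 adds a bias dimension to both the start and end representations.
        self.U = nn.Parameter(torch.randn(hidden + 1, n_labels, hidden + 1) * 0.01)

    def forward(self, h):                      # h: (B, T, hidden) token encodings, e.g. from an encoder
        ones = h.new_ones(h.shape[:-1] + (1,))
        s = torch.cat([torch.relu(self.start_mlp(h)), ones], dim=-1)
        e = torch.cat([torch.relu(self.end_mlp(h)), ones], dim=-1)
        # scores[b, i, j, l]: label l for the span starting at token i and ending at token j
        return torch.einsum("bid,dlk,bjk->bijl", s, self.U, e)

scorer = BiaffineSpanScorer()
span_scores = scorer(torch.randn(2, 10, 128))  # shape (2, 10, 10, 5)
```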
FedGPS: Personalized Cross-Silo Federated Learning for Internet of Things-enabled Predictive Maintenance
IF 1.1
Scalable Computing-Practice and Experience Pub Date : 2022-12-01 DOI: 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00137
Yuchen Jiang, Chang Ji
{"title":"FedGPS: Personalized Cross-Silo Federated Learning for Internet of Things-enabled Predictive Maintenance","authors":"Yuchen Jiang, Chang Ji","doi":"10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00137","DOIUrl":"https://doi.org/10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00137","url":null,"abstract":"Predictive maintenance (PdM) has entered into a new era adopting artificial intelligence and Internet-of-Things (IoT) technologies. It is necessary for a manufacturing company to collaborate with other clients using IoT-captured production data. However, training models in a cross-silo manner is still challenging when considering data privacy. In order to tackle these challenges, a personalized cross-silo federated learning mechanism named federated global partners searching (FedGPS) is proposed. Firstly, model parameters for the participating clients are encrypted and uploaded to the central server as input. Next, FedGPS automatically determines the collaboration degrees between clients based on data distribution. After that, personalized model updates are sent back to the clients. Finally, each client conducts local updating after data decryption. The effectiveness of the FedGPS is verified in real-world cases and our method achieves 92.35% Accuracy, 98.55% Precision, 92.90% Recall, and 95.27% F1-Score comparing with other existing models from the literature.","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90350608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
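The key step in the FedGPS abstract is that the server derives collaboration degrees between clients and returns a personalized update to each one. The sketch below shows one plausible form of that aggregation, with each client's personalized model computed as a collaboration-weighted average of all clients' parameters. The weight matrix is supplied by hand here; how FedGPS actually derives it from data distributions, and the encryption around the exchange, are not reproduced.

```python
import torch

def personalized_aggregate(client_params, collab_weights):
    """Build one personalized model per client as a weighted average of all clients' parameters."""
    stacked = torch.stack(client_params)                                # (n_clients, n_params)
    weights = collab_weights / collab_weights.sum(dim=1, keepdim=True)  # normalize each client's row
    return weights @ stacked                                            # (n_clients, n_params)

# Hypothetical flattened models from three clients and hand-picked collaboration degrees.
params = [torch.randn(6) for _ in range(3)]
collab = torch.tensor([[1.0, 0.5, 0.1],
                       [0.5, 1.0, 0.2],
                       [0.1, 0.2, 1.0]])
personalized = personalized_aggregate(params, collab)  # row i is client i's personalized model
```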