Cluster Computing: Latest Articles

Secure speech-recognition data transfer in the internet of things using a power system and a tried-and-true key generation technique
Cluster Computing Pub Date: 2024-07-29 DOI: 10.1007/s10586-024-04649-3
Zhe Wang, Shuangbai He, Guoan Li
To secure the privacy, confidentiality, and integrity of speech data (SD), secure speech recognition (SR) involves accurately recording and comprehending spoken language while employing diverse security processes. As the Internet of Things (IoT) rapidly evolves, integrating SR capabilities into IoT devices gains significance. However, ensuring the security and privacy of private SD after integration remains a critical concern. Despite the potential benefits, implementing the proposed Reptile Search Optimized Hidden Markov Model (RSO-HMM) for SR and integrating it with IoT devices may encounter complexities due to diverse device types. Moreover, maintaining data security and privacy for SD in practical IoT settings poses a significant hurdle, so seamless interoperability and robust security measures are essential. We introduce the RSO-HMM for SR, using extracted features as speech data. Gathering a diverse range of SD from speakers with varying linguistic backgrounds enhances the accuracy of the SR system. Preprocessing applies Z-score normalization for robustness and to mitigate outlier effects. The Perceptual Linear Prediction (PLP) technique efficiently extracts essential acoustic features from speech sources. To address data security, Elliptic Curve Cryptography (ECC) is employed for encryption, which is particularly suited to resource-constrained scenarios. The SR system is evaluated with key performance metrics, including accuracy, precision, recall, and F1 score. The thorough assessment demonstrates the system's strong performance, achieving an accuracy of 96%. The primary objective is to appraise the system's capacity and dependability in accurately transcribing speech signals. By proposing a comprehensive approach that combines the RSO-HMM for SR, data preprocessing techniques, and ECC encryption, this study advocates for wider adoption of SR technology within the IoT ecosystem. By tackling critical data security concerns, this approach paves the way for a safer and more efficient globally interconnected society and encourages broader use of SR technology in various applications.
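The Z-score preprocessing step mentioned in the abstract is standard and easy to sketch; the snippet below is an illustration only (the toy frame matrix is invented, not from the paper):

```python
import numpy as np

def zscore_normalize(features):
    """Column-wise Z-score normalization: each acoustic feature is
    shifted to zero mean and scaled to unit variance, damping the
    influence of outlier frames."""
    mean = features.mean(axis=0)
    std = features.std(axis=0)
    std[std == 0] = 1.0  # leave constant features unscaled
    return (features - mean) / std

# Toy "frames x features" matrix with very different feature scales.
frames = np.array([[1.0, 200.0], [2.0, 220.0], [3.0, 240.0]])
normalized = zscore_normalize(frames)
```

After normalization, both columns are directly comparable regardless of their original scales.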
Citations: 0
DDoS attack detection techniques in IoT networks: a survey
Cluster Computing Pub Date: 2024-07-26 DOI: 10.1007/s10586-024-04662-6
Amir Pakmehr, Andreas Aßmuth, Negar Taheri, Ali Ghaffari
The Internet of Things (IoT) is a rapidly emerging technology that has become increasingly valuable and vital in our daily lives. It enables connection and communication between objects and devices, allowing them to exchange information and perform intelligent operations with each other. However, due to the scale and heterogeneity of the network, the insecurity of many of these devices, and privacy-protection requirements, it faces several challenges. In the last decade, distributed denial-of-service (DDoS) attacks in IoT networks have become one of the growing challenges that require serious attention and investigation. DDoS attacks exploit the limited resources available on IoT devices, disrupting the functionality of IoT-connected applications and services. This article comprehensively examines the effects of DDoS attacks in the context of the IoT, which cause significant harm to existing systems, and investigates several solutions for identifying and dealing with this type of attack. Finally, this study suggests a broad line of research in the field of IoT security, dedicated to examining how to adapt to current challenges and to predicting future trends.
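As a concrete taste of one lightweight detection family covered by such surveys, entropy-based methods flag capture windows whose destination-address entropy collapses (volumetric attacks concentrate traffic on one victim). The sketch below is illustrative only; the threshold and addresses are invented and it is not taken from any surveyed system:

```python
import math
from collections import Counter

def shannon_entropy(items):
    """Shannon entropy (bits) of a traffic attribute such as destination IP."""
    counts = Counter(items)
    total = len(items)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_ddos(dst_ips, threshold_bits=1.0):
    """Flag a capture window whose destination-IP entropy collapses:
    during a volumetric DDoS most packets converge on one victim."""
    return shannon_entropy(dst_ips) < threshold_bits

# Normal window: traffic spread over several destinations.
normal = ["10.0.0.%d" % (i % 8) for i in range(80)]
# Attack window: almost everything targets one host.
attack = ["10.0.0.1"] * 76 + ["10.0.0.%d" % i for i in range(2, 6)]
```

Such rate- and entropy-based checks are cheap enough to run on constrained IoT gateways, which is one reason the survey literature treats them as a baseline.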
Citations: 0
Enhancing image encryption using chaotic maps: a multi-map approach for robust security and performance optimization
Cluster Computing Pub Date: 2024-07-26 DOI: 10.1007/s10586-024-04672-4
Mostafa Abodawood, Abeer Twakol Khalil, Hanan M. Amer, Mohamed Maher Ata
This paper proposes a model for image encryption based on chaotic maps. The scheme uses eight chaotic maps to perform the encryption: Logistic, Gauss, Circle, Sine, Singer, Piecewise, Tent, and Chebyshev. The two major processes of the suggested model are chaotic confusion and pixel diffusion: chaotic maps permute the pixel positions during the confusion process, and pixel values are changed during the diffusion process. To evaluate the suggested model, performance metrics such as execution time, peak signal-to-noise ratio, entropy, key sensitivity, noise attack, number of pixels change rate (NPCR), unified average changing intensity (UACI), histogram analysis, and cross-correlation were used. According to the experimental analysis, images encrypted with the suggested system have correlation coefficients that are almost zero, an NPCR of 99.6%, a UACI of 32.9%, a key space of 10^80, nearly uniform histograms, an execution time of 0.1563 ms, and an entropy of 7.9973. All of these results verify the robustness and efficiency of the suggested algorithm.
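The confusion-diffusion structure described above can be illustrated with a single logistic map (the paper combines eight maps; the key values x0 and r below are arbitrary, and this sketch is not the paper's cipher):

```python
import numpy as np

def logistic_sequence(x0, r, n):
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def encrypt(img, x0=0.3141, r=3.99):
    flat = img.flatten()
    chaos = logistic_sequence(x0, r, flat.size)
    perm = np.argsort(chaos)                    # confusion: chaotic permutation
    keystream = (chaos * 256).astype(np.uint8)  # diffusion keystream
    return (flat[perm] ^ keystream).reshape(img.shape)

def decrypt(cipher, x0=0.3141, r=3.99):
    flat = cipher.flatten()
    chaos = logistic_sequence(x0, r, flat.size)
    perm = np.argsort(chaos)
    keystream = (chaos * 256).astype(np.uint8)
    plain = np.empty_like(flat)
    plain[perm] = flat ^ keystream              # undo diffusion, then confusion
    return plain.reshape(cipher.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
cipher = encrypt(img)
restored = decrypt(cipher)
```

The pair (x0, r) acts as the shared key; because the logistic map is extremely sensitive to initial conditions, even a tiny key change yields an unrelated permutation and keystream, which is the property the paper's key-sensitivity metric measures.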
Citations: 0
Empowering bonobo optimizer for global optimization and cloud scheduling problem
Cluster Computing Pub Date: 2024-07-24 DOI: 10.1007/s10586-024-04671-5
Reham R. Mostafa, Fatma A. Hashim, Amit Chhabra, Ghaith Manita, Yaning Xiao
Task scheduling in cloud computing systems is an important and challenging NP-hard problem that involves allocating resources to tasks in a way that optimizes a performance metric. The complexity of this problem rises with the size and scale of cloud systems, the heterogeneity of cloud resources and tasks, and the dynamic nature of cloud resources. Metaheuristics are a class of algorithms that have been used effectively to solve NP-hard cloud scheduling problems (CSP). The Bonobo optimizer (BO) is a recent metaheuristic-based optimization algorithm that mimics several interesting reproductive strategies and social behaviours of bonobos, and it has shown competitive performance against several state-of-the-art metaheuristics on many optimization problems. Despite its good performance, it still suffers from inherent deficiencies such as an imbalance between exploration and exploitation and trapping in local optima. This paper proposes a modified version of the BO algorithm, called mBO, to solve the cloud scheduling problem while minimizing two important scheduling objectives: makespan and energy consumption. We incorporate four modifications into the traditional BO, namely (1) a Dimension Learning-based Hunting (DLH) search strategy, (2) a Transition Factor (TF), (3) Control Randomization (DR), and (4) Control Randomization Direction, which improve performance by helping the algorithm escape local optima and balance exploration and exploitation. The efficacy of mBO is first tested on the standard CEC'20 benchmarks, followed by its application to the CSP using real-world supercomputing workloads, namely CEA-Curie and HPC2N. Results reveal the supremacy of the proposed mBO algorithm over many contemporary metaheuristics by a competitive margin on both the CEC'20 benchmarks and the CSP. Quantitatively for the CSP, mBO reduced makespan and energy consumption by 8.20-23.73% and 2.57-11.87%, respectively, against the tested algorithms. For the HPC2N workloads, mBO achieved a makespan reduction of 10.99-29.48% and an energy-consumption reduction of 3.55-30.65% over the compared metaheuristics.
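A fitness function for the two scheduling objectives (makespan and energy) might look like the following sketch, which any metaheuristic could evaluate per candidate schedule; the linear active/idle power model is an assumption for illustration, not the paper's exact formulation:

```python
def evaluate_schedule(schedule, task_lengths, vm_speeds, p_active, p_idle):
    """Makespan and energy of one task-to-VM assignment.

    schedule[i] is the VM index task i runs on; energy uses a simple
    linear power model (active while busy, idle until makespan).
    """
    busy = [0.0] * len(vm_speeds)
    for task, vm in enumerate(schedule):
        busy[vm] += task_lengths[task] / vm_speeds[vm]
    makespan = max(busy)
    energy = sum(p_active[v] * busy[v] + p_idle[v] * (makespan - busy[v])
                 for v in range(len(vm_speeds)))
    return makespan, energy

# Three tasks on two VMs: task 0 on VM 0, tasks 1 and 2 on VM 1.
makespan, energy = evaluate_schedule([0, 1, 1], [10, 20, 30], [1.0, 2.0],
                                     [2.0, 3.0], [1.0, 1.0])
```

A metaheuristic such as mBO searches the space of `schedule` vectors, trading these two objectives off (e.g., via a weighted sum or Pareto ranking).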
Citations: 0
Vehicle edge server deployment based on reinforcement learning in cloud-edge collaborative environment
Cluster Computing Pub Date: 2024-07-24 DOI: 10.1007/s10586-024-04659-1
Feiyan Guo, Bing Tang, Ying Wang, Xiaoqing Luo
The rapid development of Internet of Vehicles (IoV) technology has led to a sharp increase in vehicle data. Traditional cloud computing is no longer sufficient to meet the high-bandwidth and low-latency requirements of IoV tasks, and ensuring the service quality of applications on in-vehicle devices has become challenging. Edge computing moves computing tasks from the cloud to edge servers with sufficient computing resources, effectively reducing network congestion and data-propagation latency. Integrating edge computing with IoV technology is an effective approach to realizing intelligent IoV applications. This paper investigates the deployment of vehicle edge servers in a cloud-edge collaborative environment. Taking into consideration vehicular mobility and the computational demands of IoV applications, vehicle edge server deployment within the cloud-edge collaborative framework is formulated as a multi-objective optimization problem with two primary objectives: minimizing service-access latency and balancing server workload. To address this problem, a model is established for optimizing the deployment of vehicle edge servers, and a deployment approach named VSPR is proposed. This method integrates hierarchical clustering and reinforcement learning techniques to achieve the desired multi-objective optimization. Experiments using a real dataset from Shanghai Telecom comprehensively evaluate the workload balance and service-access latency of vehicle edge servers under different deployment methods. Experimental results demonstrate that VSPR achieves an optimized balance between low latency and workload balancing while ensuring service quality, and outperforms the SRL, CQP, K-means, and Random algorithms by 4.76%, 44.59%, 40.78%, and 69.33%, respectively.
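The two optimization objectives can be sketched as a simple evaluation function over a candidate deployment; Euclidean distance stands in for service-access latency and per-server vehicle counts for workload, which is a deliberate simplification of the paper's model:

```python
import statistics

def deployment_objectives(vehicles, servers):
    """Mean access latency (distance to the nearest edge server as a
    proxy) and the variance of the per-server workload, for one
    candidate set of server sites."""
    loads = [0] * len(servers)
    total = 0.0
    for vx, vy in vehicles:
        dist, idx = min(
            (((vx - sx) ** 2 + (vy - sy) ** 2) ** 0.5, i)
            for i, (sx, sy) in enumerate(servers))
        total += dist
        loads[idx] += 1
    return total / len(vehicles), statistics.pvariance(loads)

latency, imbalance = deployment_objectives(
    [(0, 0), (0, 1), (10, 0), (10, 1)], [(0, 0), (10, 0)])
```

A deployment method like VSPR searches over candidate `servers` placements to drive both numbers down simultaneously.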
Citations: 0
Migration of containers on the basis of load prediction with dynamic inertia weight based PSO algorithm
Cluster Computing Pub Date: 2024-07-24 DOI: 10.1007/s10586-024-04676-0
Shabnam Bawa, Prashant Singh Rana, RajKumar Tekchandani
Due to the necessity of virtualization in a fog environment with limited resources, service providers are challenged to reduce the energy consumption of hosts. The consolidation of virtual machines (VMs) has led to a significant amount of research into the effective management of energy usage. Because of their high computational overhead, existing virtualization techniques may not be suited to minimizing the energy consumption of fog devices. As containers have recently gained popularity for encapsulating fog services, they are an ideal candidate for addressing this issue, particularly on fog devices. In the proposed work, an ensemble model is used for load prediction on hosts, classifying them as overloaded, underloaded, or balanced. A container selection algorithm identifies containers for migration when a host becomes overloaded. Additionally, an energy-efficient container migration strategy driven by a dynamic inertia weight-based particle swarm optimization (DIWPSO) algorithm is introduced to meet resource demands. This approach migrates containers from overloaded hosts to others in order to balance the load and reduce the energy consumption of hosts located on fog nodes. Experimental results demonstrate that load balancing can be achieved at a lower migration cost, and the proposed DIWPSO algorithm reduces energy consumption by 10.89% through container migration. Moreover, compared with meta-heuristic solutions such as PSO, ABC (Artificial Bee Colony), and E-ABC (Enhanced Artificial Bee Colony), the proposed DIWPSO algorithm shows superior performance across various evaluation parameters.
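The core ingredient, a dynamic inertia weight inside the PSO velocity update, can be sketched as follows; this is a generic linear decay for illustration, not the paper's exact weight schedule or its container-migration logic:

```python
import random

def dynamic_inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decaying inertia weight: wide exploration early in the
    run, fine-grained exploitation near the end."""
    return w_max - (w_max - w_min) * t / t_max

def pso_step(positions, velocities, pbest, gbest, t, t_max, c1=2.0, c2=2.0):
    """One canonical PSO velocity/position update using the dynamic
    inertia weight."""
    w = dynamic_inertia(t, t_max)
    for i, x in enumerate(positions):
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - x[d])
                                + c2 * r2 * (gbest[d] - x[d]))
            x[d] += velocities[i][d]
```

A fixed inertia weight forces one trade-off for the whole run; decaying it over time is the standard way to get both global search early and local refinement late.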
Citations: 0
A hybrid butterfly and Newton–Raphson swarm intelligence algorithm based on opposition-based learning
Cluster Computing Pub Date: 2024-07-21 DOI: 10.1007/s10586-024-04678-y
Chuan Li, Yanjie Zhu
In response to the issues of local-optima entrapment, slow convergence, and low optimization accuracy in the Butterfly optimization algorithm (BOA), this paper proposes a hybrid butterfly and Newton–Raphson swarm intelligence algorithm based on opposition-based learning (BOANRBO). First, the initialization strategy of the butterfly algorithm is improved through opposition-based learning to accelerate convergence. Second, adaptive perception modal factors are introduced into the original butterfly algorithm, controlling the adjustment rate through the adjustment factor α to enhance the algorithm's global search capability. Then, the exploration probability p is dynamically adjusted based on the algorithm's runtime, increasing or decreasing it according to changes in fitness to achieve a balance between exploration and exploitation. Finally, the exploration capability of BOA is enhanced by incorporating the Newton–Raphson-based optimizer (NRBO) to help BOA avoid local-optima traps. The optimization performance of BOANRBO is evaluated on 65 standard benchmark functions from CEC-2005, CEC-2017, and CEC-2022, and the results are compared with 17 other well-known algorithms. Simulation results indicate that BOANRBO achieved 8 of the 12 optimal results (66.7%) on CEC-2022, 27 of 30 (90%) on CEC-2017, and 22 of 23 (95.6%) on CEC-2005. Additionally, experiments have validated the algorithm's practicality and superior performance on 5 engineering design optimization problems and 2 real-world problems.
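The opposition-based initialization idea can be sketched in a few lines; the population size, bounds, and fitness function below are placeholders for illustration, not the paper's setup:

```python
import random

def opposite(x, lb, ub):
    """Opposite point of x in the box [lb, ub]: x_opp = lb + ub - x."""
    return [lb[d] + ub[d] - x[d] for d in range(len(x))]

def obl_init(pop_size, lb, ub, fitness):
    """Opposition-based initialization: draw a random population, add
    every point's opposite, and keep the fittest half of the union."""
    dim = len(lb)
    pop = [[random.uniform(lb[d], ub[d]) for d in range(dim)]
           for _ in range(pop_size)]
    union = pop + [opposite(x, lb, ub) for x in pop]
    return sorted(union, key=fitness)[:pop_size]
```

The intuition: if a random sample lands far from the optimum, its mirror image across the center of the search box may land closer, so evaluating both and keeping the better half starts the search nearer to good regions.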
Citations: 0
A novel approach for energy consumption management in cloud centers based on adaptive fuzzy neural systems
Cluster Computing Pub Date: 2024-07-21 DOI: 10.1007/s10586-024-04665-3
Hong Huang, Yu Wang, Yue Cai, Hong Wang
Cloud computing enables global access to tool-based IT services, accommodating a wide range of applications across consumer, scientific, and commercial sectors on a pay-per-use model. However, the substantial energy consumption of data centers hosting cloud applications leads to significant operational costs and environmental impact due to carbon emissions. Each day, these centers handle numerous requests from diverse users, requiring powerful servers and associated peripherals that consume substantial energy. Efficient resource utilization is therefore essential for mitigating energy consumption in cloud centers. In our research, we adopted a novel hybrid approach to dynamically allocate resources in the cloud, focusing on energy reduction and load prediction. Specifically, we employed neuro-fuzzy systems for load prediction and the ant colony optimization algorithm for virtual machine migration. Comparative analysis against the existing literature demonstrates the effectiveness of our approach: across 810 time periods, our method exhibits an average resource-loss reduction of 21.3% and a 5.6% lower average request-denial rate compared with alternative strategies. The suggested methods were assessed using the PlanetLab workload and a purpose-built CloudSim simulator. Moreover, our methodology was validated through comprehensive experiments using the SPECpower benchmark, achieving over 98% accuracy in forecasting energy consumption for the proposed model. These results underscore the practicality and efficiency of our strategy in optimizing cloud resource management while addressing energy-efficiency challenges in data-center operations.
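The predict-then-migrate pipeline can be illustrated with deliberately simple stand-ins: exponential smoothing in place of the adaptive neuro-fuzzy predictor, and a bare threshold in place of the ant-colony migration policy. All names, histories, and thresholds below are invented:

```python
def smooth_forecast(history, alpha=0.5):
    """One-step load forecast via exponential smoothing (a simple
    stand-in for the paper's adaptive neuro-fuzzy predictor)."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def migration_sources(host_histories, overload=0.8):
    """Hosts whose forecast CPU load exceeds the overload threshold are
    flagged as migration sources (the paper then uses ant colony
    optimization to choose destinations; that step is omitted here)."""
    return [h for h, hist in host_histories.items()
            if smooth_forecast(hist) > overload]

hosts = {"h1": [0.2, 0.3, 0.25], "h2": [0.7, 0.9, 0.95]}
```

The point of forecasting rather than reacting is to move VMs before a host saturates, avoiding the request denials that reactive consolidation incurs.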
Citations: 0
Localization of try block and generation of catch block to handle exception using an improved LSTM
Cluster Computing Pub Date: 2024-07-20 DOI: 10.1007/s10586-024-04633-x
Preetesh Purohit, Anuradha Purohit, Vrinda Tokekar
Several contemporary programming languages, including Java, provide exception management as a crucial built-in feature. By employing try-catch blocks, it enables developers to handle unusual or unexpected conditions that might arise at runtime. If exception management is neglected or applied improperly, it may result in serious incidents such as equipment failure. Exception-handling mechanisms are difficult to implement and time-consuming with the preceding methodologies. This research introduces an efficient Long Short-Term Memory (LSTM) technique for handling exceptions automatically, which can identify the locations of try blocks and automatically create the corresponding catch blocks. A large body of Java code is collected from GitHub and split into several fragments. For try-block localization, a Bidirectional LSTM (BiLSTM) is used first as a token-level encoder and then as a statement-level encoder, after which a Support Vector Machine (SVM) predicts the try block present in the given source code. For catch-block generation, a BiLSTM is used as an encoder and an LSTM as a decoder, with an SVM predicting the noisy tokens. The loss functions of this encoder-decoder model are trained to be as small as possible. The trained model then uses the black widow method to forecast the following tokens one by one and generates the entire catch block. The proposed work reaches 85% accuracy for try-block localization and 50% accuracy for catch-block generation. The improved LSTM with an attention mechanism produces an optimal solution compared with existing techniques, making the proposed method the best choice for handling exceptions.
Citations: 0
Causal network construction based on KICA-ECCM for root cause diagnosis of industrial processes
Cluster Computing Pub Date: 2024-07-20 DOI: 10.1007/s10586-024-04663-5
Yayin He, Xiangshun Li
Root cause diagnosis can find the propagation path of faults in a timely manner when a fault occurs, and it is therefore of key significance in the maintenance and fault diagnosis of industrial systems. A commonly used method for root cause diagnosis is causal analysis. In this work, the Extended Convergent Cross Mapping (ECCM) algorithm is used for root cause diagnosis in industry; however, it has difficulty dealing with large amounts of steady-state data and obtaining accurate propagation paths. Therefore, a causal analysis method based on Kernel Independent Component Analysis (KICA) and ECCM is proposed to address these problems. First, the KICA algorithm is used to detect faults and obtain the transition-process data. Second, the ECCM algorithm is used to construct causal relationships among variables from the transition-process data and build the fault-propagation path diagram. Finally, the effectiveness of the proposed KICA-ECCM algorithm is tested on the Tennessee Eastman Process and the Industrial Process Control Test Facility platform. Compared with the ECCM and GC algorithms, the KICA-ECCM algorithm performs better in terms of accuracy and efficiency.
Citations: 0