{"title":"EEG-Based Epilepsy Recognition via Federated Learning With Differential Privacy","authors":"Yuling Luo, Bingxiong Jiang, Sheng Qin, Qiang Fu, Shunsheng Zhang","doi":"10.1002/cpe.70072","DOIUrl":"https://doi.org/10.1002/cpe.70072","url":null,"abstract":"<div>\u0000 \u0000 <p>Epilepsy is a complex chronic brain disorder that can be identified by observing brain signals. In general, the electroencephalogram (EEG) can be used to detect these brain signals. In order to produce a high-quality model, data from numerous patients can be gathered on a central server. However, sending the patient's raw data to the central computer may lead to privacy leakage. To address this problem, this work uses federated learning and differential privacy to train the model jointly. Furthermore, the epilepsy data is unbalanced as seizure only happens for a minority of time in one day, which influences the performance of the model. Thus, this work also uses label-distribution-aware-margin (LDAM) loss to solve this issue. This work is evaluated in intracranial EEG datasets, which consist of two dogs' EEG records. The global model trained jointly with LDAM loss can achieve an accuracy of 96.95%, a sensitivity of 78.9%, a specificity of 96.145%, an F1 score of 70.435%, and a geometric mean of 87.785%. Compared with the other works, the accuracy has improved by about ˜9.31%, while the specificity and the geometric mean have also improved by about ˜10.75% and ˜1.8%, respectively.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 9-11","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143826694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modern Advances in Artificial Intelligence Across the Athletic Domain","authors":"Xin Zhao, Haotian Zheng, Qiyun Zhang","doi":"10.1002/cpe.70068","DOIUrl":"https://doi.org/10.1002/cpe.70068","url":null,"abstract":"<div>\u0000 \u0000 <p>Artificial intelligence (AI) is a branch of computer science focused on creating systems that can perform tasks typically requiring human intelligence, such as learning, reasoning, and understanding natural language. The integration of AI into the athletic domain has revolutionized various aspects, from performance analysis to fan engagement and operational efficiency. Recent advancements have showcased the potential of AI technologies, including machine learning, computer vision, and data analytics, to enhance athletic performance, optimize training regimens, and provide personalized experiences for fans. This review examines the current landscape of AI applications in athletics, highlighting innovative use cases such as player performance monitoring, injury prevention, game strategy optimization, and audience engagement. It underscores the transformative impact of AI on traditional athletic practices while also addressing the challenges and ethical concerns associated with its growing adoption. Future directions for research and development in this field are also discussed, emphasizing the need for ongoing collaboration between technologists and athletic professionals.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 9-11","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143822268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Density Viewpoint Clustering Outlier Detection Algorithm With Feature Weight and Entropy","authors":"Gezi Shi, Jing Li, Tangbao Zou, Haiyan Yu, Feng Zhao","doi":"10.1002/cpe.70086","DOIUrl":"https://doi.org/10.1002/cpe.70086","url":null,"abstract":"<div>\u0000 \u0000 <p>The k-means outlier removal (KMOR) algorithm uses the distance criterion to measure similarity in cluster analysis for outlier detection and places outliers in a separate cluster to achieve outlier detection with clustering. However, the distance-based clustering outlier detection algorithm has poor effect and is sensitive to parameters and clustering center for the datasets with a special distribution and large number of outliers. Therefore, this article proposes a density viewpoint clustering outlier detection algorithm with feature weighting and entropy by introducing feature and entropy information. First, the algorithm introduces the entropy regularization into the objective function to control the clustering process by minimizing the clustering dispersion and maximizing the negative entropy. Second, feature weight and regularization strategies are introduced in the objective function and outlier detection criteria to improve the detection accuracy of the algorithm for feature-imbalanced datasets while controlling the weight of features. In addition, the weighted distance function of data dimension normalization is used to calculate the viewpoint, and the correct clustering center is formed by density viewpoint guidance to improve the overall performance. Finally, five experiments by synthetic datasets show that the algorithm has an average classification accuracy of 98.22<span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mo>%</mo>\u0000 </mrow>\u0000 <annotation>$$ % $$</annotation>\u0000 </semantics></math>, which is higher than other algorithms. Further demonstrated by ten UCI datasets show that the algorithm can balance data classification and outlier detection well.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 9-11","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143822269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Building Scalable and Quantum Attack Resistant Authenticated Message Delivery System for Internet of Vehicles With Blockchain Consensus Mechanism","authors":"Rahul Singh, Laxminarayan Das, Dharminder Chaudhary","doi":"10.1002/cpe.70071","DOIUrl":"https://doi.org/10.1002/cpe.70071","url":null,"abstract":"<div>\u0000 \u0000 <p>The goal of intelligent transportation systems is becoming more and more realized with the use of the Internet of Vehicles (IoV) and the rapid advancement of processing and communication technologies. Nonetheless, a lot of Internet of Vehicles applications depend on a central processing and storage unit as well as wireless transmission mediators. This may result in exorbitant expenses and delays, as well as the disclosure of real data. We suggest the Vehicle-Based Quantum-Safe Blockchain Consensus (VBQBC) algorithm as a solution to these problems and an enhancement of the effectiveness of data storage, authentic processing, and data sharing in the Internet of Vehicles. The proposed VBQBC algorithm uses a consensus algorithm and blockchain technology to assure authentic communication between cars, overcoming the shortcomings and constraints of current state-of-the-art systems. This algorithm uses ring learning with errors, and short integer solution assumptions in the construction of aggregation signatures to provide authenticity in blockchain technology. This aggregation technique reduces the size of data required for verification and improves scalability. The algorithm also incorporates a quantum-safe authentication procedure as well as a key distribution and request process, which are demonstrated when vehicles move between different zones. This allows blockchain-based systems to maintain their security, scalability, and efficiency even in the face of future cryptographic problems. The aggregation of signatures size for the proposed framework to the number of signatures to be aggregated <i>N</i> varies between <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <msup>\u0000 <mn>2</mn>\u0000 <mn>10</mn>\u0000 </msup>\u0000 </mrow>\u0000 <annotation>$$ {2}^{10} $$</annotation>\u0000 </semantics></math> (63.48 kb) and <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <msup>\u0000 <mn>2</mn>\u0000 <mn>20</mn>\u0000 </msup>\u0000 </mrow>\u0000 <annotation>$$ {2}^{20} $$</annotation>\u0000 </semantics></math> (131.34 kb). For aggregation of <span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <msup>\u0000 <mn>2</mn>\u0000 <mn>10</mn>\u0000 </msup>\u0000 </mrow>\u0000 <annotation>$$ {2}^{10} $$</annotation>\u0000 </semantics></math> signatures, the proposed framework has signature size 6.3 kb, and the aggregated size of signatures is 64 kb. 
In simulation findings, our suggested VBQBC algorithm outperformed previous techniques in terms of authentication delay, key processing time, attack detection rate, throughput, and packet loss rate.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 9-11","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143818318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
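The bandwidth saving implied by those figures can be verified with plain arithmetic on the numbers the abstract reports:

```python
# Back-of-the-envelope check using only the sizes stated in the abstract.
n_sigs = 2 ** 10
individual_kb = 6.3          # one signature
aggregated_kb = 63.48        # aggregate of 2^10 signatures

naive_total_kb = n_sigs * individual_kb   # 6451.2 kb if sent separately
ratio = naive_total_kb / aggregated_kb    # roughly a 100x reduction
print(f"naive {naive_total_kb:.1f} kb vs aggregate {aggregated_kb} kb ({ratio:.0f}x)")
```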
{"title":"Unveiling the Invisible: Powering Security Threat Detection in WSN With AI","authors":"K. P. Uvarajan, Kishore Balasubramanian, C. Gowri Shankar","doi":"10.1002/cpe.70049","DOIUrl":"https://doi.org/10.1002/cpe.70049","url":null,"abstract":"<div>\u0000 \u0000 <p>Security in wireless sensor networks (WSNs) is of paramount importance due to their pervasive deployment in critical infrastructure and sensitive environments. Despite their ubiquitous nature, WSNs are vulnerable to various security threats, ranging from unauthorized access to data manipulation and network disruption. In response to these challenges, this paper proposes a novel approach leveraging the Base Stacked Long Short-Term Memory with Attention Models and AdaBoost Ensemble (BSLAM-AE) architecture to enhance security in WSNs. The proposed model is designed to address the unique characteristics and challenges of WSNs, combining deep learning and ensemble learning techniques to detect and mitigate security threats effectively. The BSLAM-AE model incorporates stacked LSTM networks with attention mechanisms, enabling the analysis of time-series data and the detection of subtle anomalies or security breaches. In addition, an AdaBoost ensemble-learning component iteratively trains a set of models to improve predictive accuracy and robustness. Implemented in the PyCharm integrated development environment, experimental results demonstrate the efficacy of the proposed model, achieving an impressive accuracy of 98% in detecting security threats in WSNs. Overall, the BSLAM-AE model represents a significant advancement in WSN security, offering a comprehensive and efficient solution for detecting and mitigating security threats. By leveraging deep learning and ensemble learning techniques, the proposed model provides enhanced security and reliability, thereby safeguarding WSNs against potential attacks and ensuring the integrity and availability of critical data and infrastructure.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 9-11","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143818385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Multi-Strategy Fusion for Mobile Robot Path Planning via Dung Beetle Optimization","authors":"Junhu Peng, Tao Peng, Can Tang, Xingxing Xie","doi":"10.1002/cpe.70060","DOIUrl":"https://doi.org/10.1002/cpe.70060","url":null,"abstract":"<div>\u0000 \u0000 <p>In recent years, robot path planning has become a critical aspect of autonomous navigation, especially in dynamic and complex environments where robots must operate efficiently and safely. One of the primary challenges in this domain is achieving high convergence efficiency while avoiding local optimal solutions, which can hinder the robot's ability to find the best possible path. Additionally, ensuring that the robot follows a path with minimal turns and reduced path length is essential for enhancing operational efficiency and reducing energy consumption. These challenges become even more pronounced in high-dimensional optimization tasks where the search space is vast and difficult to navigate. In this article, a multi-strategy fusion enhanced dung beetle optimization algorithm (MIDBO) is introduced to tackle key challenges in robot path planning, such as slow convergence and the problem of local optima, and so on, in which MIDBO incorporates several key innovations to enhance performance and robustness. First, the Tent chaotic strategy is used to diversify initial solutions during population initialization, thereby mitigating the risk of local optima and improving global search capability. Second, a penalty term is integrated into the fitness function to penalize excessive turning angles, aiming to reduce the frequency and magnitude of turns. This modification results in smoother and more efficient paths with reduced lengths. Third, the inertia weight is adaptively updated by a sine-based mechanism, which dynamically balances exploration and exploitation, accelerates convergence, and enhances algorithm stability. To further improve efficiency for path planning, the MIDBO integrates a Levy flight strategy and a local search mechanism to boost the search capability during the stealing phase, contributing to smoother and more practical paths planned for the robot. A series of thorough and reproducible experiments are performed using benchmark test functions to evaluate the performance of MIDBO in comparison to several leading metaheuristic algorithms. The results demonstrate that MIDBO achieves superior outcomes in path planning tasks with optimal and mean path lengths of 42.1068 and 44.4755, respectively, which significantly outperforms other algorithms including IPSO (47.6244, 55.9375), original DBO (47.6244, 55.9375), and ISSA (47.6244, 55.9375). MIDBO also markedly reduces the number of turns by achieving best and average values of 10 and 13.4, respectively, compared with IPSO (11, 16.1), original DBO (12, 15.3), and ISSA (12, 16.4). Besides, the consistent performance of MIDBO is confirmed via stability analysis based on the mean square error of path lengths and turn counts across 10 independent trials. 
For the high-dimensional optimization tasks, MIDBO achieves 8 and 7 functions about top rankings on 50- and 100-dimensional functions, and specifically MIDBO outperforms DBO, IPSO, and ISSA on 13, 18, and 11 functio","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 9-11","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143818386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
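Two of MIDBO's ingredients, Tent-chaotic initialization and Levy flight steps, have standard textbook forms. A sketch under those standard definitions; the burn-in count, mu, and beta are illustrative, and the paper's integration into the dung beetle stealing phase is not reproduced:

```python
import numpy as np
from math import gamma, sin, pi

def tent_map_population(pop_size, dim, lower, upper, mu=0.7, burn_in=10):
    """Tent-chaotic initialization: iterating the tent map spreads initial
    candidates more evenly over the search box than uniform sampling."""
    x = np.random.rand(pop_size, dim)
    for _ in range(burn_in):
        x = np.where(x < mu, x / mu, (1 - x) / (1 - mu))
    return lower + x * (upper - lower)

def levy_step(dim, beta=1.5):
    """Levy flight step via Mantegna's algorithm, the usual realization of
    a 'Levy flight strategy' in metaheuristics."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)   # heavy-tailed steps help escape local optima
```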
{"title":"A New Approach to Cloud Resource Scheduling Using Genetic Reinforcement Kernel Optimization and Machine Learning Model","authors":"Anupam Yadav, Ashish Sharma","doi":"10.1002/cpe.70080","DOIUrl":"https://doi.org/10.1002/cpe.70080","url":null,"abstract":"<div>\u0000 \u0000 <p>Two of the most essential elements of cloud resource management systems are resource management as well as scheduling. Due to the heterogeneity of resources, their interdependencies, and the unpredictable nature of load in a cloud environment, cloud Resource Scheduling (RS) is the most important issue to handle effectively. One of the most challenging tasks in cloud computing is RS, where resources must be assigned to the necessary tasks or jobs in accordance with the necessary Quality of Service (QoS) of the cloud applications. This study suggests a unique method for scheduling cloud resources based on a virtual data center and machine learning model. The genetic reinforcement kernel swarm optimization methodology and cloud data centers are deployed here. The suggested network analysis aims to balance energy usage and SLA. The suggested framework is assessed by doing experiments on the Google Cluster dataset, Planet Lab, and Bitbrains VM traces, as well as three real-world workload datasets. Execution cost, Execution time, Makespan, Energy consumption, Resource utilization, and Scalability are factors that were examined here. The proposed framework is superior to several performance indicators compared to state-of-the-art techniques. The project's methodology addresses the issues of load imbalance and excessive migration expenses.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 9-11","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143818764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Customer-Satisfaction and Risk-Aware Pricing Policy for Profit Maximization in Cloud Computing","authors":"Siyi Chen, Haiyang Kuang, Jiaxin Zhou, Jin Liu","doi":"10.1002/cpe.70075","DOIUrl":"https://doi.org/10.1002/cpe.70075","url":null,"abstract":"<div>\u0000 \u0000 <p>Cloud computing has been attracting increasing concern due to its remarkable flexibility and substantial economic returns. As a crucial feature of cloud computing, cloud service providers typically offer customers various resources through on-demand services and charge them accordingly. Considering servers as the service carriers in cloud computing, their availability may sometimes be affected by potential uncertainties, which can lead to temporary server breakdowns. It should be noted that such cases related to the availability of servers have rarely been examined in the profit maximization problem. In this paper, an optimal configuration scheme for addressing the issue of profit maximization in the cloud service system is proposed, taking server breakdowns into account. By introducing the concept of generalized service time, the probability density function of waiting time is initially derived. Owing to the complexity of the designed function, a two-stage fitting method is employed to obtain its approximation, thereby significantly reducing the analysis difficulty. On this basis, considering the average waiting time and server breakdowns, a compensation mechanism is adopted to design the pricing strategy for customers. Then, the revenue, cost, and profit models of cloud service providers are constructed. Since the optimal solution of the profit model is difficult to calculate due to its complexity, a heuristic algorithm is introduced to solve its numerical solution. Finally, the proposed scheme is verified through a series of numerical simulations, and the results indicate that the profits of cloud service providers and customer experience can be satisfied simultaneously.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 9-11","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143818766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Variantrank: Business Process Event Log Sampling Based on Importance of Trace Variants","authors":"Jiawei Chen, Jiayi Zhong, Guosheng Kang, Jianxun Liu, Yiping Wen","doi":"10.1002/cpe.70092","DOIUrl":"https://doi.org/10.1002/cpe.70092","url":null,"abstract":"<div>\u0000 \u0000 <p>To address the issues of low sampling quality and efficiency in processing large-scale event logs in existing business process event log sampling methods, a new method, named VariantRank, is proposed, which is based on the importance of trace variants. First, the importance of each trace variant is calculated based on the activity importance and the importance of directly-follow relationships within the trace variants. Then, the trace variants are ranked according to their importance. Finally, based on the given sampling rate and the ranking of trace variants, the final sampling is performed to obtain the sample event logs. The effectiveness of the proposed sampling method is evaluated in terms of both sampling quality and sampling efficiency across 8 public event log datasets. The experimental analysis shows that, compared with the state-of-the-art sampling methods, VariantRank improves the sampling efficiency while ensuring the sampling quality.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 9-11","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143818737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High-Precision Indoor Visible Light Positioning Method for Line-Of-Sight Scenes Based on a Spatiotemporal Sequence Attention Mechanism","authors":"Yonghao Yu, Dawei Zhao, Yongwei Tang, WengTak Kuok, Wei Ding","doi":"10.1002/cpe.70058","DOIUrl":"https://doi.org/10.1002/cpe.70058","url":null,"abstract":"<div>\u0000 \u0000 <p>Using deep learning to improve the accuracy of indoor visible light positioning (VLP) systems has gradually become a widely used research strategy in the field. However, current deep learning-based indoor visible light localization algorithms have not been able to effectively mine the deep temporal and spatial sequence features in signals, resulting in complex network construction and low localization accuracy. To address this issue, the text proposes a deep learning framework that utilizes an attention mechanism to train a small number of randomly continuously sampled spatial received signals to predict the coordinates of the received signals and encode the spatiotemporal sequence attributes of the received signals as a feature into the data, constructed a highly reliable spatiotemporal sequence attention mechanism for indoor visible light localization method. Combined with Convolutional Neural Networks (CNN), the localization accuracy is further improved. Through simulation experiments, it has been verified that the neural network structure designed in this paper has better positioning accuracy compared to advanced algorithms, and can still achieve centimeter-level (9.886cm) average positioning error under low signal-to-noise ratio (SNR) conditions. It is proved that the method proposed in this paper is reliable in the indoor VLP system.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 9-11","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143818738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}