{"title":"Industrial control system intrusion detection method based on belief rule base with gradient descent","authors":"Jinyuan Li , Guangyu Qian , Wei He , Wei Zhang","doi":"10.1016/j.cose.2025.104488","DOIUrl":"10.1016/j.cose.2025.104488","url":null,"abstract":"<div><div>Intrusion detection is important for maintaining the smooth operation of industrial control systems (ICSs). The belief rule base (BRB), as a hybrid information-driven model, has been widely used in various fields because of its high accuracy and good interpretability. However, when facing intrusion detection problems in ICSs with high-dimensional features, excessive rules often arise, leading to slow model inference and optimization due to the large number of rules. Therefore, this paper proposes an interval structure belief rule base with mini-batch gradient descent optimization (IBRB-MBGD) for ICS intrusion detection. First, to address the issue of rule explosion caused by high-dimensional features, a new modeling approach is proposed that uses reference intervals instead of single values, and the rule generation mode is changed from conjunction to disjunction, further improving the model inference method and effectively solving the combination rule explosion. Second, the large amount of historical data slows down the model optimization process; thus, an optimization method based on minibatch gradient descent is proposed to quickly optimize the parameters in the BRB. Finally, experiments were conducted on natural gas pipeline system and water storage tank system intrusion detection data, and the detection rate reached >90 %, verifying the effectiveness of the model.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"155 ","pages":"Article 104488"},"PeriodicalIF":4.8,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143828563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Updating Windows malware detectors: Balancing robustness and regression against adversarial EXEmples","authors":"Matous Kozak , Luca Demetrio , Dmitrijs Trizna , Fabio Roli","doi":"10.1016/j.cose.2025.104466","DOIUrl":"10.1016/j.cose.2025.104466","url":null,"abstract":"<div><div>Adversarial EXEmples are carefully-perturbed programs tailored to evade machine learning Windows malware detectors, with an ongoing effort to develop robust models able to address detection effectiveness. However, even if robust models can prevent the majority of EXEmples, to maintain predictive power over time, models are fine-tuned to newer threats, leading either to partial updates or time-consuming retraining from scratch. Thus, even if the robustness against adversarial EXEmples is higher, the new models might suffer a regression in performance by misclassifying threats that were previously correctly detected. For these reasons, we study the trade-off between accuracy and regression when updating Windows malware detectors by proposing EXE-scanner, a plugin that can be chained to existing detectors to promptly stop EXEmples without causing regression. We empirically show that previously proposed hardening techniques suffer a regression of accuracy when updating non-robust models, exacerbating the gap when considering low false positives regimes and temporal drifts affecting data. Also, through EXE-scanner we gain evidence on the detectability of adversarial EXEmples, showcasing the presence of artifacts left inside while creating them. Due to its design, EXE-scanner can be chained to any classifier to obtain the best performance without the need for costly retraining. To foster reproducibility, we openly release the source code, along with the dataset of adversarial EXEmples based on state-of-the-art perturbation algorithms.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"155 ","pages":"Article 104466"},"PeriodicalIF":4.8,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143843561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can individual differences in cognitive capacity predict cybersecurity performance?","authors":"Kelly A Cole , Alexander L. Francis , Marcus K Rogers , Joe Balazs","doi":"10.1016/j.cose.2025.104497","DOIUrl":"10.1016/j.cose.2025.104497","url":null,"abstract":"<div><div>Cyber defense analysts work in highly demanding environments, making numerous critical decisions on the basis of complex information presented at a rapid rate. Individual differences in cognitive capacity may contribute significantly to cyber performance success but researchers have yet to definitively determine what cognitive functions play the most significant role in cybersecurity performance. To further understand the role of individual differences in attention and cognition that affect cyber performance in the cybersecurity domain, this exploratory study involved collecting measures of cognitive performance from 19 experienced cybersecurity analysts using three well-established measures designed for individual differences research. The individual differences were compared with behavioral performance (accuracy and response time) in a simulated incident detection system identified as ecologically valid in cybersecurity research. Results showed a significant relationship between incident detection performance and multiple cognitive variables. Specifically, analysts with higher working memory capacity performed more accurately and more quickly, while those that showed more evidence of distractibility performed less well. These findings suggest that easily obtained behavioral measures of attention control may be useful for: (a) identifying individuals who may be better suited to performing specific cognitive demands of incident detection tasks, and (b) informing system design to reduce demands on specific areas of cognitive processing to improve incident response decisions.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"155 ","pages":"Article 104497"},"PeriodicalIF":4.8,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143843563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Protecting machine learning from poisoning attacks: A risk-based approach","authors":"Nicola Bena , Marco Anisetti , Ernesto Damiani , Chan Yeob Yeun , Claudio A. Ardagna","doi":"10.1016/j.cose.2025.104468","DOIUrl":"10.1016/j.cose.2025.104468","url":null,"abstract":"<div><div>The ever-increasing interest in and widespread diffusion of Machine Learning (ML)-based applications has driven a substantial amount of research into offensive and defensive ML. ML models can be attacked from different angles: poisoning attacks, the focus of this paper, inject maliciously crafted data points in the training set to modify the model behavior; adversarial attacks maliciously manipulate inference-time data points to fool the ML model and drive the prediction of the ML model according to the attacker’s objective. Ensemble-based techniques are among the most relevant defenses against poisoning attacks and replace the monolithic ML model with an ensemble of ML models trained on different (disjoint) subsets of the training set. They assign data points to the training sets of the models in the ensemble (routing) randomly or using a hash function, assuming that evenly distributing poisoned data points positively influences ML robustness. Our paper departs from this assumption and implements a risk-based ensemble technique where a risk management process is used to perform a smart routing of data points to the training sets. An extensive experimental evaluation demonstrates the effectiveness of the proposed approach in terms of its soundness, robustness, and performance.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"155 ","pages":"Article 104468"},"PeriodicalIF":4.8,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143850079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FLADEN: Federated Learning for Anomaly DEtection in IoT Networks","authors":"Fatma Hendaoui , Rahma Meddeb , Lamia Trabelsi , Ahlem Ferchichi , Rawia Ahmed","doi":"10.1016/j.cose.2025.104446","DOIUrl":"10.1016/j.cose.2025.104446","url":null,"abstract":"<div><div>Sensitive applications are strict in terms of data privacy. In this context, intrusion detection systems cannot access the data and analyze it to discover attacks signatures. As a result, it is necessary to analyze data locally without disclosing it to a third party. Machine learning models can achieve this task. This paper proposes a machine-learning framework for intrusion detection on IoT networks. The proposed framework enables participating entities to analyze their data more efficiently and privately. A new real-world dataset is generated using online threat intelligence sources. FLADEN updates the federated learning library to optimize processing time with an accuracy of 99.85%. The proposed framework was applied to machine learning models and shows a precision of 99. 89%, an F1 score of 99. 93%, and a recall of 99.91%. This work presents implications for those researchers who may focus on large-scale anomaly detection with privacy preservation in IoT networks.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"155 ","pages":"Article 104446"},"PeriodicalIF":4.8,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143833984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A malware traffic detection method based on Victim-Attacker interaction patterns","authors":"Yanze Qu , Hailong Ma , Chaofan Zheng , Yiming Jiang , Wenbo Wang","doi":"10.1016/j.cose.2025.104487","DOIUrl":"10.1016/j.cose.2025.104487","url":null,"abstract":"<div><div>The widespread adoption of encryption protocols has provided benefits for personal privacy, while also offering cover for the command and control (C&C) communication of malware such as Trojans, presenting significant challenges to existing network monitoring systems. Existing methods exhibit limited capacity to discern threats across network flows, while neglecting the prevalent packet loss phenomenon in real-world network environments. This paper proposes a malware traffic detection method based on the interaction patterns between compromised hosts and C&C servers. With a novel detection unit called channel unit representing interaction patterns, compared to existing methods, our proposed method is capable of discerning threats across network flows and is more resilient to packet loss. Evaluation experiments show that our method has superior detection performance in both binary and multi-class classification scenarios, achieving accuracy rates of 99.84 % and 96.08 % respectively. In terms of packet loss tolerance, compared with existing methods, our method exhibits the minimal performance degradation under a 20 % packet loss rate, maintaining a multi-classification accuracy of 99.63 % and a binary classification accuracy of 95.72 %.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"155 ","pages":"Article 104487"},"PeriodicalIF":4.8,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143821507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Securing IoT devices in edge computing through reinforcement learning","authors":"Anit Kumar , Dhanpratap Singh","doi":"10.1016/j.cose.2025.104474","DOIUrl":"10.1016/j.cose.2025.104474","url":null,"abstract":"<div><div>The exponentially increasing demand for IoT devices with the expectation of maximum fulfillment of the user needs to bring the integration of the Edger server on the premise of the IoT devices. The small size but the need for complex computation and high-end software requires the amount of additional hardware setup that can never be possible with the absence of an Edge server. Since the Edger server continuously gathers the data from the IoT device for further computation and permanent storage in either local storage or a cloud server, it attracts intruders to try to steal sensitive data of the IoT devices from the Edge server. With the presence of many artificial intelligence tools, an intruder can make serious attacks on the Edger server by breaking its security boundaries. Any individual autonomous entity like a robot, satellite, or self-driving vehicle has a set of interconnected IoT devices (sensors) to form a network, which needs to be so flexible that any new IoT device can easily be integrated into this network without any major difficulties. None of the organizations has ever adopted non-scalable IoT networks. To counter such security challenges, we propose a scalable, robust, and reliable Novel Reinforcement Learning approach having a proper task scheduling mechanism that is powered by using the epsilon-greedy search Q-learning method. The novelty of our proposed method is its high performance which allows the agent to take actions at the time only when it finds a noticeable drop in the network performance in terms of packet delivery ratio, average throughput, and end-to-end delay hyperparameters. Experiments carried out by us along with simulation and real datasets, prove that our proposed security method provides outstanding results as compared to other security approaches discussed in this paper and can counter malicious attacks efficiently. Once our security model gets trained with a threshold amount of times, then after this threshold time, we observe that no benign data packets are lost even with the presence of any external threats and always provide stable communication to the end users. The proposed novel reinforcement learning method is more consistent, resilient, scalable, and accurate than other similar machine learning-based security methods and always has a false positive rate of <2 %.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"155 ","pages":"Article 104474"},"PeriodicalIF":4.8,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143828565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Anomalous identity recognition model based on vehicle driving characteristic verification in typical scenarios","authors":"Xing Chen, Jingsheng Wang, Song Yan, Zuyin Wang","doi":"10.1016/j.cose.2025.104476","DOIUrl":"10.1016/j.cose.2025.104476","url":null,"abstract":"<div><div>Vehicle-to-everything (V2X) enables the exchange and sharing of information between vehicles and the outside world, which improves driving safety, reduces traffic congestion, and enhances traffic efficiency. However, this information exchange and transmission of massive data also exposes many attack surfaces, which may result in security incidents such as vehicle theft, information leakage, and driving failure. Traditional methods to ensure traffic information interaction through information security have limitations. This paper proposes an innovative model for anomalous identity recognition based on vehicle driving characteristic verification. The model aims to ensure consistency among the speed data from different sources, types of transmission data, and perception data obtained by sensors. The model is based on a multi-class support vector machine (multi-class SVM) to identify vehicle behavior and a bidirectional gated recurrent unit (BiGRU) neural network to predict vehicle speed. A credible calculation method was designed to calculate the error between the predicted speed and the actual collected speed in the car-following and lane-changing scenarios. The Next Generation Simulation dataset was used to train and test the models. The experimental results showed that the overall recognition accuracy of the multi-class SVM model was 95.50 %, the predicted precision with an order of magnitude of cm/s was achieved by the BiGRU model, and the overall recognition accuracy of the model was >90 %. The public key infrastructure (PKI) scheme is currently the mainstream scheme of information security in the Internet of Vehicles. This paper analyzes the feasibility of the proposed anomalous identity recognition model applied in the PKI framework, which can effectively identify anomalous vehicle identities by discriminating the vehicle speed and effectively ensure the security between a vehicle and the external network communication (4G/5G/V2X).</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"155 ","pages":"Article 104476"},"PeriodicalIF":4.8,"publicationDate":"2025-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143817725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards targeted and universal adversarial attacks against network traffic classification","authors":"Ruiyang Ding , Lei Sun , Zhiyi Ding, Weifei Zang, Leyu Dai","doi":"10.1016/j.cose.2025.104470","DOIUrl":"10.1016/j.cose.2025.104470","url":null,"abstract":"<div><div>With the continuous advancement of technology, deep learning has become the mainstream method in the field of network traffic classification, demonstrating excellent classification performance. However, due to the inherent vulnerability of deep learning models, they also face the threat of adversarial attacks. Currently, adversarial attack techniques for network traffic classification only remain at the level of untargeted attacks, and most of them are attack methods based on specific perturbation. These methods have high time overhead, high sample dependency, and are unable to perform targeted attacks on target categories, which poses significant limitations in practical applications. To this end, this article proposes a targeted and universal adversarial attack method against network traffic classification. It iteratively trains to minimize the distance between network traffic and the target category feature domain, thereby generating the universal perturbation vector for the target category. This maximizes the prediction probability of the model output target category, allowing the classifier to incorrectly predict any non-target category network traffic as the specified target category. Meanwhile, this article uses dynamic masking and modular operations to generate adversarial network traffic, ensuring the data reversibility and transferability of network traffic packets during adversarial attacks. Finally, this article selected three standard network traffic datasets with different classification tasks, CICIoT2023, ISCX2016, and USTC-TFC2016, as well as four mainstream network traffic classification models such as LeNet5, for experiments, and built the adversarial attack testing platform in the real network environment. The results show that the proposed method effectively implements targeted and universal adversarial attacks against network traffic classification on three datasets and four classification models, with the average attack success rate of over 56 % and the single attack time of 1–3 ms, greatly improving the application scope and practical value of adversarial attack techniques in the field of network traffic classification.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"155 ","pages":"Article 104470"},"PeriodicalIF":4.8,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143817720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LTL-based runtime verification framework for cyber-attack anomaly prediction in cyber–physical systems","authors":"Ayodeji James Akande, Zhe Hou, Ernest Foo, Qinyi Li","doi":"10.1016/j.cose.2025.104455","DOIUrl":"10.1016/j.cose.2025.104455","url":null,"abstract":"<div><div>An anomaly is any unexpected or abnormal behaviour, event, or data pattern within a network of physical and computational components caused by data errors, cyber-attacks, hardware failures, or other unforeseen events. Anomaly detection analyses events after they occur, while anomaly prediction forecasts them before they manifest. The increasing complexity of Cyber-Physical Systems (CPS) presents challenges in fault management and vulnerability to advanced attacks, highlighting the need for early intervention through anomaly prediction. Existing anomaly prediction methods often fail due to a lack of formal guarantees required for safety-critical applications. In this paper, we introduce our anomaly prediction framework which merges the advantages of data analytics and the derivation of Linear Temporal Logic (LTL) formulas. LTL-based runtime monitoring and checking is a well-established technique efficient for tackling challenges in real-time and promptly. The framework processes historical data, clusters them to extract predictive patterns, and forms data sequences that represent these trends. These sequences are fed into an LTL learning algorithm to produce a formula that represents the pattern. This formula functions as a security property programmed into a runtime checker to verify system correctness and predict the possibility of anomalies. We evaluated our framework using three datasets collected from a cyber-physical system testbed and the experimental findings demonstrate a minimum accuracy of 90% in predicting anomalies.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"155 ","pages":"Article 104455"},"PeriodicalIF":4.8,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143817717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}