Automated federated learning-based adversarial attack and defence in industrial control systems

Impact Factor 1.5 · Q3 · Automation & Control Systems
Guo-Qiang Zeng, Jun-Min Shao, Kang-Di Lu, Guang-Gang Geng, Jian Weng
{"title":"Automated federated learning-based adversarial attack and defence in industrial control systems","authors":"Guo-Qiang Zeng,&nbsp;Jun-Min Shao,&nbsp;Kang-Di Lu,&nbsp;Guang-Gang Geng,&nbsp;Jian Weng","doi":"10.1049/csy2.12117","DOIUrl":null,"url":null,"abstract":"<p>With the development of deep learning and federated learning (FL), federated intrusion detection systems (IDSs) based on deep learning have played a significant role in securing industrial control systems (ICSs). However, adversarial attacks on ICSs may compromise the ability of deep learning-based IDSs to accurately detect cyberattacks, leading to serious consequences. Moreover, in the process of generating adversarial samples, the selection of replacement models lacks an effective method, which may not fully expose the vulnerabilities of the models. The authors first propose an automated FL-based method to generate adversarial samples in ICSs, called AFL-GAS, which uses the principle of transfer attack and fully considers the importance of replacement models during the process of adversarial sample generation. In the proposed AFL-GAS method, a lightweight neural architecture search method is developed to find the optimised replacement model composed of a combination of four lightweight basic blocks. Then, to enhance the adversarial robustness, the authors propose a multi-objective neural architecture search-based IDS method against adversarial attacks in ICSs, called MoNAS-IDSAA, by considering both classification performance on regular samples and adversarial robustness simultaneously. The experimental results on three widely used intrusion detection datasets in ICSs, such as secure water treatment (SWaT), Water Distribution, and Power System Attack, demonstrate that the proposed AFL-GAS method has obvious advantages in evasion rate and lightweight compared with other four methods. Besides, the proposed MoNAS-IDSAA method not only has a better classification performance, but also has obvious advantages in model adversarial robustness compared with one manually designed federated adversarial learning-based IDS method.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":null,"pages":null},"PeriodicalIF":1.5000,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12117","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Cybersystems and Robotics","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/csy2.12117","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
引用次数: 0

Abstract

With the development of deep learning and federated learning (FL), federated intrusion detection systems (IDSs) based on deep learning have played a significant role in securing industrial control systems (ICSs). However, adversarial attacks on ICSs may compromise the ability of deep learning-based IDSs to detect cyberattacks accurately, leading to serious consequences. Moreover, in the process of generating adversarial samples, there is no effective method for selecting the replacement (surrogate) model, which may leave the vulnerabilities of the target models only partially exposed. The authors first propose an automated FL-based method for generating adversarial samples in ICSs, called AFL-GAS, which exploits the principle of transfer attacks and explicitly accounts for the importance of the replacement model during adversarial sample generation. In the proposed AFL-GAS method, a lightweight neural architecture search method is developed to find an optimised replacement model composed of a combination of four lightweight basic blocks. Then, to enhance adversarial robustness, the authors propose a multi-objective neural architecture search-based IDS method against adversarial attacks in ICSs, called MoNAS-IDSAA, which considers classification performance on regular samples and adversarial robustness simultaneously. Experimental results on three widely used ICS intrusion detection datasets, namely Secure Water Treatment (SWaT), Water Distribution, and Power System Attack, demonstrate that the proposed AFL-GAS method has clear advantages in evasion rate and model lightweightness over the other four methods. In addition, the proposed MoNAS-IDSAA method not only achieves better classification performance but also offers clearly stronger adversarial robustness than a manually designed federated adversarial learning-based IDS method.
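The abstract describes AFL-GAS as a transfer attack: adversarial samples are crafted against a replacement (surrogate) model and then used to evade the target federated IDS, with evasion rate as the success metric. The Python sketch below illustrates only that general transfer-attack workflow under simple assumptions (an FGSM-style one-step perturbation, hypothetical MLP surrogate and target models, and made-up feature dimensions); it is not the authors' AFL-GAS pipeline, which additionally searches the surrogate architecture over four lightweight basic blocks.

```python
# Minimal sketch of a transfer-based adversarial attack on an IDS classifier.
# Illustrates the general transfer-attack idea referenced in the abstract, NOT
# the authors' AFL-GAS method; the surrogate/target architectures, epsilon,
# and feature dimensions below are all hypothetical.
import torch
import torch.nn as nn

def fgsm_on_surrogate(surrogate: nn.Module, x: torch.Tensor, y: torch.Tensor,
                      epsilon: float = 0.05) -> torch.Tensor:
    """Craft adversarial samples on a surrogate (replacement) model with FGSM."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(surrogate(x_adv), y)
    loss.backward()
    # One-step gradient-sign perturbation, clamped to the normalised feature range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def evasion_rate(target_ids: nn.Module, x_adv: torch.Tensor, y: torch.Tensor) -> float:
    """Fraction of adversarial samples the target IDS misclassifies (transferability)."""
    with torch.no_grad():
        preds = target_ids(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()

# Hypothetical usage: tiny MLPs over 51 SWaT-like features, binary labels.
surrogate = nn.Sequential(nn.Linear(51, 64), nn.ReLU(), nn.Linear(64, 2))
target_ids = nn.Sequential(nn.Linear(51, 128), nn.ReLU(), nn.Linear(128, 2))
x, y = torch.rand(32, 51), torch.randint(0, 2, (32,))
x_adv = fgsm_on_surrogate(surrogate, x, y)
print(f"evasion rate on target IDS: {evasion_rate(target_ids, x_adv, y):.2%}")
```

On the defence side, an evasion-rate measurement of this kind would presumably feed one objective of the MoNAS-IDSAA multi-objective architecture search, alongside classification accuracy on regular samples.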

Source journal: IET Cybersystems and Robotics (Computer Science: Information Systems)
CiteScore: 3.70
Self-citation rate: 0.00%
Articles published: 31
Review time: 34 weeks