Adversarial Attacks and Defenses in Automated Control Systems: A Comprehensive Benchmark

Vitaliy Pozdnyakov, Aleksandr Kovalenko, Ilya Makarov, Mikhail Drobyshevskiy, Kirill Lukyanov
{"title":"自动控制系统中的对抗性攻击与防御:综合基准","authors":"Vitaliy Pozdnyakov, Aleksandr Kovalenko, Ilya Makarov, Mikhail Drobyshevskiy, Kirill Lukyanov","doi":"arxiv-2403.13502","DOIUrl":null,"url":null,"abstract":"Integrating machine learning into Automated Control Systems (ACS) enhances\ndecision-making in industrial process management. One of the limitations to the\nwidespread adoption of these technologies in industry is the vulnerability of\nneural networks to adversarial attacks. This study explores the threats in\ndeploying deep learning models for fault diagnosis in ACS using the Tennessee\nEastman Process dataset. By evaluating three neural networks with different\narchitectures, we subject them to six types of adversarial attacks and explore\nfive different defense methods. Our results highlight the strong vulnerability\nof models to adversarial samples and the varying effectiveness of defense\nstrategies. We also propose a novel protection approach by combining multiple\ndefense methods and demonstrate it's efficacy. This research contributes\nseveral insights into securing machine learning within ACS, ensuring robust\nfault diagnosis in industrial processes.","PeriodicalId":501062,"journal":{"name":"arXiv - CS - Systems and Control","volume":"132 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Adversarial Attacks and Defenses in Automated Control Systems: A Comprehensive Benchmark\",\"authors\":\"Vitaliy Pozdnyakov, Aleksandr Kovalenko, Ilya Makarov, Mikhail Drobyshevskiy, Kirill Lukyanov\",\"doi\":\"arxiv-2403.13502\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Integrating machine learning into Automated Control Systems (ACS) enhances\\ndecision-making in industrial process management. One of the limitations to the\\nwidespread adoption of these technologies in industry is the vulnerability of\\nneural networks to adversarial attacks. This study explores the threats in\\ndeploying deep learning models for fault diagnosis in ACS using the Tennessee\\nEastman Process dataset. By evaluating three neural networks with different\\narchitectures, we subject them to six types of adversarial attacks and explore\\nfive different defense methods. Our results highlight the strong vulnerability\\nof models to adversarial samples and the varying effectiveness of defense\\nstrategies. We also propose a novel protection approach by combining multiple\\ndefense methods and demonstrate it's efficacy. 
This research contributes\\nseveral insights into securing machine learning within ACS, ensuring robust\\nfault diagnosis in industrial processes.\",\"PeriodicalId\":501062,\"journal\":{\"name\":\"arXiv - CS - Systems and Control\",\"volume\":\"132 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-03-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Systems and Control\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2403.13502\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Systems and Control","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2403.13502","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Integrating machine learning into Automated Control Systems (ACS) enhances decision-making in industrial process management. One barrier to the widespread adoption of these technologies in industry is the vulnerability of neural networks to adversarial attacks. This study explores the threats posed by deploying deep learning models for fault diagnosis in ACS, using the Tennessee Eastman Process dataset. We evaluate three neural networks with different architectures, subject them to six types of adversarial attacks, and explore five defense methods. Our results highlight the strong vulnerability of the models to adversarial samples and the varying effectiveness of the defense strategies. We also propose a novel protection approach that combines multiple defense methods and demonstrate its efficacy. This research contributes several insights into securing machine learning within ACS, ensuring robust fault diagnosis in industrial processes.
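The abstract does not name the six attack types benchmarked. As a hedged illustration of the general setting, the sketch below applies FGSM (the Fast Gradient Sign Method), a standard single-step gradient attack, to a hypothetical fault-diagnosis classifier. The input width (52 sensor variables) and class count (21 fault classes) follow the common Tennessee Eastman Process setup, but the MLP architecture, `FaultClassifier`, and all hyperparameters are assumptions for illustration, not one of the paper's three models.

```python
import torch
import torch.nn as nn

class FaultClassifier(nn.Module):
    """Hypothetical MLP fault-diagnosis model; NOT one of the paper's architectures."""
    def __init__(self, n_features: int = 52, n_classes: int = 21):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                eps: float = 0.05) -> torch.Tensor:
    """Fast Gradient Sign Method: x_adv = x + eps * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed step along the input gradient; detach so the result is a plain tensor.
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Toy usage: random stand-ins for normalized TEP sensor readings and fault labels.
model = FaultClassifier()
x = torch.randn(8, 52)
y = torch.randint(0, 21, (8,))
x_adv = fgsm_attack(model, x, y)
flipped = (model(x).argmax(dim=1) != model(x_adv).argmax(dim=1)).float().mean()
print(f"fraction of predictions flipped by the attack: {flipped:.2f}")
```

Here `eps` is the perturbation budget in the normalized feature space; stronger iterative attacks (e.g., PGD) repeat this signed-gradient step several times with projection back onto the budget.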
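Likewise, the five defenses and the proposed combination are not specified in the abstract. The sketch below shows adversarial training, one widely used defense, as an assumed example. It reuses `FaultClassifier` and `fgsm_attack` from the previous sketch, and the data and hyperparameters are illustrative stand-ins rather than the paper's benchmark protocol.

```python
import torch

# Adversarial training: craft attacks against the current model each step
# and optimize on clean and adversarial samples jointly.
model = FaultClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

for step in range(100):                          # toy loop on random stand-in data
    x = torch.randn(32, 52)
    y = torch.randint(0, 21, (32,))
    x_adv = fgsm_attack(model, x, y, eps=0.05)   # attack the current parameters
    opt.zero_grad()                              # clear grads left over from the attack
    # The combined loss keeps clean accuracy while pushing the decision
    # boundary away from the perturbed samples.
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()
```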