Dynamic trigger-based attacks against next-generation IoT malware family classifiers

Impact Factor: 4.8 · CAS Tier 2, Computer Science · JCR Q1, COMPUTER SCIENCE, INFORMATION SYSTEMS
Yefei Zhang, Sadegh Torabi, Jun Yan, Chadi Assi
DOI: 10.1016/j.cose.2024.104187
Journal: Computers & Security, Vol. 149, Article 104187
Published: 2024-11-12
Full text: https://www.sciencedirect.com/science/article/pii/S0167404824004929
Citations: 0

Abstract

The evolution of IoT malware and the effectiveness of defense strategies, e.g., leveraging malware family classification, have driven the development of advanced classification learning models. These models, particularly those that utilize model-extracted features, significantly enhance classification performance while minimizing the need for extensive expert knowledge from developers. However, a critical challenge lies in the interpretability of these learning models, which can obscure potential security risks. Among these risks are backdoor attacks, a sophisticated and deceptive threat where attackers induce malicious behaviors in the model under specific triggers.
In response to the growing need for integrity and reliability in these models, this work assesses the vulnerability of state-of-the-art IoT malware classification models to backdoor attacks. Given the complexities of attacking model-based classifiers, we propose a novel trigger generation framework, B-CTG, supported by a specialized training procedure. This framework enables B-CTG to dynamically poison or attack samples to achieve specific objectives. From an attacker’s perspective, the design and training of B-CTG incorporate knowledge from the IoT domain to ensure the attack’s effectiveness. We conduct experiments under two distinct knowledge assumptions: the main evaluation, which assesses the attack method’s performance when the attacker has limited control over the model training pipeline, and the transferred setting, which further explores the significance of knowledge in predicting attacks in real-world scenarios.
Our in-depth analysis focuses on attack performance in specific scenarios rather than a broad examination across multiple scenarios. Results from the main evaluation demonstrate that the proposed attack strategy can achieve high success rates even with low poisoning ratios, though stability remains a concern. Additionally, the inconsistent trends in model performance suggest that designers may struggle to detect the poisoned state of a model based on its performance alone. The transferred setting highlights the critical importance of model and feature knowledge for successful attack predictions, with feature knowledge proving particularly crucial. This insight prompts further investigation into model-agnostic mitigation methods and their effectiveness against the proposed attack strategy, with findings indicating that stability remains a significant concern for both attackers and defenders.
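The abstract describes B-CTG as a framework that dynamically generates triggers and poisons training samples so the classifier learns attacker-chosen behavior; the generator's details are only in the full text. As a rough illustration of the underlying backdoor-poisoning idea (not the authors' method), the sketch below injects a static additive trigger into a small fraction of a toy feature dataset and relabels those samples to a target family. The function name `poison_dataset`, the trigger pattern, and all numeric values are hypothetical.

```python
import numpy as np

def poison_dataset(X, y, trigger, target_class, poison_ratio=0.05, seed=0):
    """Classic backdoor poisoning: add a fixed trigger pattern to a
    random fraction of samples and relabel them to the target class."""
    rng = np.random.default_rng(seed)
    Xp, yp = X.copy(), y.copy()
    n_poison = int(len(X) * poison_ratio)
    idx = rng.choice(len(X), size=n_poison, replace=False)
    Xp[idx] = np.clip(Xp[idx] + trigger, 0.0, 1.0)  # keep features in [0, 1]
    yp[idx] = target_class                          # attacker-chosen label
    return Xp, yp, idx

# Toy data: 200 samples, 16 model-extracted features, 4 malware families.
X = np.random.default_rng(1).random((200, 16))
y = np.random.default_rng(2).integers(0, 4, size=200)
trigger = np.zeros(16)
trigger[:3] = 0.5  # perturb the first three feature dimensions
Xp, yp, idx = poison_dataset(X, y, trigger, target_class=0, poison_ratio=0.05)
```

Note that B-CTG's triggers are *dynamic* (sample-specific, produced by a trained generator), which is precisely what makes them harder to detect than the static pattern shown here; this sketch only conveys the poison-and-relabel mechanics that the abstract's "low poisoning ratios" refer to.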
Source journal: Computers & Security (Engineering Technology — Computer: Information Systems)
CiteScore: 12.40
Self-citation rate: 7.10%
Annual articles: 365
Review time: 10.7 months
Journal description: Computers & Security is the most respected technical journal in the IT security field. With its high-profile editorial board and informative regular features and columns, the journal is essential reading for IT security professionals around the world. Computers & Security provides you with a unique blend of leading edge research and sound practical management advice. It is aimed at the professional involved with computer security, audit, control and data integrity in all sectors — industry, commerce and academia. Recognized worldwide as THE primary source of reference for applied research and technical expertise, it is your first step to fully secure systems.