Frequency Domain Backdoor Attacks for Visual Object Tracking

Jiahao Luo

Journal of Electrical Systems, 2024-07-10. DOI: 10.52783/jes.5089

Abstract
Visual object tracking (VOT) is a key topic in computer vision. It serves as an essential component of various higher-level problems in the field, such as motion analysis, event detection, and activity understanding, and it finds extensive applications including human-computer interaction in video, video surveillance, and autonomous driving. Driven by the rapid development of deep neural networks (DNNs), VOT has achieved unprecedented progress. However, the lack of interpretability in DNNs introduces security risks, notably backdoor attacks: an attacker injects a hidden backdoor into the network so that the compromised model behaves normally on regular inputs but produces predetermined outputs when specific conditions set by the attacker are met. Existing triggers for VOT backdoor attacks are poorly concealed. We leverage the sensitivity of DNNs to small perturbations to generate perturbations in the frequency domain that are indistinguishable at the pixel level, yielding an invisible backdoor attack that is both effective and concealed. Additionally, we employ a differential evolution (DE) algorithm to optimize trigger generation, thereby reducing the capabilities required of the attacker. We validate the effectiveness of the attack across various datasets and models.
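The abstract describes embedding the trigger as a frequency-domain perturbation that is indistinguishable at the pixel level. The paper's exact transform, coefficients, and magnitudes are not given here; the sketch below is only a minimal illustration of the general idea, assuming a 2-D FFT on a grayscale image, with `strength` and `band` as hypothetical parameters rather than values from the paper.

```python
import numpy as np

def add_frequency_trigger(img, strength=2.0, band=(8, 16)):
    """Embed a backdoor trigger as a tiny frequency-domain perturbation.

    `img` is a float grayscale image in [0, 255]. `strength` and `band`
    are illustrative choices, not values from the paper.
    """
    F = np.fft.fft2(img.astype(np.float64))
    lo, hi = band
    # Nudge a small block of mid-frequency coefficients. Spread across
    # the whole image by the inverse FFT, the change per pixel is far
    # below one grey level, so the trigger is visually imperceptible.
    F[lo:hi, lo:hi] += strength
    poisoned = np.real(np.fft.ifft2(F))  # drop the tiny imaginary residue
    return np.clip(poisoned, 0.0, 255.0)
```

For a 64x64 image the maximum per-pixel change with these settings is 64 * 2 / 4096 ≈ 0.03 grey levels, which is invisible to the eye; in a poisoning attack, such images would be paired with the attacker's target tracking output during training.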
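The abstract also mentions optimizing trigger generation with differential evolution, which is gradient-free and so fits an attacker who cannot backpropagate through the victim model. The DE variant and the attack objective used in the paper are not specified here; the following is a generic DE/rand/1/bin loop, sketched under that caveat, which could in principle search over a low-dimensional trigger parameterization (in the paper's setting, the objective would be some attack-success measure).

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=20, iters=100,
                           F=0.8, CR=0.9, seed=0):
    """Minimize `objective` with a standard DE/rand/1/bin loop.

    `bounds` is a list of (low, high) pairs, one per dimension.
    """
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([objective(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation: combine three distinct individuals (not i).
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with at least one mutant gene.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection.
            f = objective(trial)
            if f < fitness[i]:
                pop[i], fitness[i] = trial, f
    best = int(np.argmin(fitness))
    return pop[best], float(fitness[best])
```

Because DE only queries the objective, the attacker needs black-box access to the victim model at most, which matches the abstract's claim that DE reduces the capabilities the attacker requires.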