Mingyang Lei;Hong Song;Jingfan Fan;Deqiang Xiao;Danni Ai;Ying Gu;Jian Yang
{"title":"GAA:用于物体追踪的幽灵对抗攻击","authors":"Mingyang Lei;Hong Song;Jingfan Fan;Deqiang Xiao;Danni Ai;Ying Gu;Jian Yang","doi":"10.1109/TETCI.2024.3369403","DOIUrl":null,"url":null,"abstract":"Adversarial attack of convolutional neural networks (CNN) is a technique for deceiving models with perturbations, which provides a way to evaluate the robustness of models. Adversarial attack research has primarily focused on single images. However, videos are more widely used. The existing attack methods generally require iterative optimization on different video sequences with high time-consuming. In this paper, we propose a simple and effective approach for attacking video sequences, called Ghost Adversarial Attack (GAA), to greatly degrade the tracking performance of the state-of-the-art (SOTA) CNN-based trackers with the minimum ghost perturbations. Considering the timeliness of the attack, we only generate the ghost adversarial example once with a novel ghost-generator and use a less computable attack way in subsequent frames. The ghost-generator is used to extract the target region and generate the indistinguishable ghost noise of the target, hence misleading the tracker. Moreover, we propose a novel combined loss that includes the content loss, the ghost loss, and the transferred-fixed loss, which are used in different parts of the proposed method. The combined loss can help to generate similar adversarial examples with slight noises, like a ghost of the real target. Experiments were conducted on six benchmark datasets (UAV123, UAV20L, NFS, LaSOT, OTB50, and OTB100). The experimental results indicate that the ghost adversarial examples produced by GAA are well stealthy while remaining effective in fooling SOTA trackers with high transferability. 
The GAA can reduce the tracking success rate by an average of 66.6% and the precision rate by an average of 68.3%.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":null,"pages":null},"PeriodicalIF":5.3000,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"GAA: Ghost Adversarial Attack for Object Tracking\",\"authors\":\"Mingyang Lei;Hong Song;Jingfan Fan;Deqiang Xiao;Danni Ai;Ying Gu;Jian Yang\",\"doi\":\"10.1109/TETCI.2024.3369403\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Adversarial attack of convolutional neural networks (CNN) is a technique for deceiving models with perturbations, which provides a way to evaluate the robustness of models. Adversarial attack research has primarily focused on single images. However, videos are more widely used. The existing attack methods generally require iterative optimization on different video sequences with high time-consuming. In this paper, we propose a simple and effective approach for attacking video sequences, called Ghost Adversarial Attack (GAA), to greatly degrade the tracking performance of the state-of-the-art (SOTA) CNN-based trackers with the minimum ghost perturbations. Considering the timeliness of the attack, we only generate the ghost adversarial example once with a novel ghost-generator and use a less computable attack way in subsequent frames. The ghost-generator is used to extract the target region and generate the indistinguishable ghost noise of the target, hence misleading the tracker. Moreover, we propose a novel combined loss that includes the content loss, the ghost loss, and the transferred-fixed loss, which are used in different parts of the proposed method. The combined loss can help to generate similar adversarial examples with slight noises, like a ghost of the real target. 
Experiments were conducted on six benchmark datasets (UAV123, UAV20L, NFS, LaSOT, OTB50, and OTB100). The experimental results indicate that the ghost adversarial examples produced by GAA are well stealthy while remaining effective in fooling SOTA trackers with high transferability. The GAA can reduce the tracking success rate by an average of 66.6% and the precision rate by an average of 68.3%.\",\"PeriodicalId\":13135,\"journal\":{\"name\":\"IEEE Transactions on Emerging Topics in Computational Intelligence\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":5.3000,\"publicationDate\":\"2024-03-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Emerging Topics in Computational Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10466775/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Emerging Topics in Computational Intelligence","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10466775/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Adversarial attacks on convolutional neural networks (CNNs) deceive models with crafted perturbations and thereby provide a way to evaluate model robustness. Adversarial attack research has primarily focused on single images, yet videos are far more widely used, and existing attack methods generally require iterative optimization on each video sequence, which is highly time-consuming. In this paper, we propose a simple and effective approach for attacking video sequences, called Ghost Adversarial Attack (GAA), which greatly degrades the tracking performance of state-of-the-art (SOTA) CNN-based trackers with minimal ghost perturbations. To keep the attack timely, we generate the ghost adversarial example only once with a novel ghost-generator and use a computationally cheaper attack in subsequent frames. The ghost-generator extracts the target region and generates indistinguishable ghost noise of the target, thereby misleading the tracker. Moreover, we propose a novel combined loss comprising a content loss, a ghost loss, and a transferred-fixed loss, each used in a different part of the proposed method. The combined loss helps generate adversarial examples with only slight noise, like a ghost of the real target. Experiments were conducted on six benchmark datasets (UAV123, UAV20L, NFS, LaSOT, OTB50, and OTB100). The results indicate that the ghost adversarial examples produced by GAA are highly stealthy while remaining effective at fooling SOTA trackers, with high transferability. GAA reduces the tracking success rate by an average of 66.6% and the precision rate by an average of 68.3%.
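The abstract names the three terms of the combined loss (content, ghost, and transferred-fixed) but not their exact definitions. The sketch below is only an illustration of how such a weighted combination might be assembled; every function body, name, and weight here is an assumption for illustration, not the paper's actual formulation.

```python
import numpy as np

def content_loss(adv: np.ndarray, clean: np.ndarray) -> float:
    # Illustrative: keeps the adversarial frame visually close to the clean frame (MSE).
    return float(np.mean((adv - clean) ** 2))

def ghost_loss(perturbation: np.ndarray) -> float:
    # Illustrative: keeps the ghost perturbation small (mean absolute magnitude).
    return float(np.mean(np.abs(perturbation)))

def transferred_fixed_loss(score_adv: np.ndarray, score_clean: np.ndarray) -> float:
    # Illustrative: rewards driving the tracker's response on the adversarial
    # frame away from its response on the clean frame (negated squared distance,
    # so minimizing the total loss maximizes the response gap).
    return float(-np.mean((score_adv - score_clean) ** 2))

def combined_loss(adv, clean, score_adv, score_clean, weights=(1.0, 1.0, 1.0)) -> float:
    # Weighted sum of the three terms; the weights are hypothetical.
    perturbation = adv - clean
    return (weights[0] * content_loss(adv, clean)
            + weights[1] * ghost_loss(perturbation)
            + weights[2] * transferred_fixed_loss(score_adv, score_clean))
```

With identical clean and adversarial inputs all three terms vanish, so the combined loss is zero; an optimizer would then trade off image fidelity (first two terms) against tracker disruption (third term).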
Journal introduction:
The IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) publishes original articles on emerging aspects of computational intelligence, including theory, applications, and surveys.
TETCI is an electronic-only publication and publishes six issues per year.
Authors are encouraged to submit manuscripts on any emerging topic in computational intelligence, especially nature-inspired computing topics not covered by other IEEE Computational Intelligence Society journals. Illustrative examples include glial cell networks, computational neuroscience, brain-computer interfaces, ambient intelligence, non-fuzzy computing with words, artificial life, cultural learning, artificial endocrine networks, social reasoning, artificial hormone networks, and computational intelligence for the IoT and Smart-X technologies.