{"title":"Mixup Virtual Adversarial Training for Robust Vision Transformers","authors":"Weili Shi;Sheng Li","doi":"10.1109/TBDATA.2024.3453754","DOIUrl":null,"url":null,"abstract":"Inspired by the success of transformers in natural language processing, vision transformers have been proposed to address a wide range of computer vision tasks, such as image classification, object detection and image segmentation, and they have achieved very promising performance. However, the robustness of vision transformers has been relatively under-explored. Recent studies have revealed that pre-trained vision transformers are also vulnerable to white-box adversarial attacks on the downstream image classification tasks. The adversarial attacks (e.g., FGSM and PGD) designed for convolutional neural networks (CNNs) can also cause severe performance drop for vision transformers. In this paper, we evaluate the robustness of vision transformers fine-tuned with the off-the-shelf methods under adversarial attacks on CIFAR-10 and CIFAR-100 and further propose a data-augmented virtual adversarial training approach called MixVAT, which is able to enhance the robustness of pre-trained vision transformers against adversarial attacks on the downstream tasks with the unlabelled data. Extensive results on multiple datasets demonstrate the superiority of our approach over baselines on adversarial robustness, without compromising generalization ability of the model.","PeriodicalId":13106,"journal":{"name":"IEEE Transactions on Big Data","volume":"11 3","pages":"1309-1320"},"PeriodicalIF":7.5000,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Big Data","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10664014/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Inspired by the success of transformers in natural language processing, vision transformers have been proposed to address a wide range of computer vision tasks, such as image classification, object detection and image segmentation, and they have achieved very promising performance. However, the robustness of vision transformers has been relatively under-explored. Recent studies have revealed that pre-trained vision transformers are also vulnerable to white-box adversarial attacks on downstream image classification tasks. The adversarial attacks (e.g., FGSM and PGD) designed for convolutional neural networks (CNNs) can also cause a severe performance drop for vision transformers. In this paper, we evaluate the robustness of vision transformers fine-tuned with off-the-shelf methods under adversarial attacks on CIFAR-10 and CIFAR-100, and further propose a data-augmented virtual adversarial training approach called MixVAT, which enhances the robustness of pre-trained vision transformers against adversarial attacks on downstream tasks using unlabelled data. Extensive results on multiple datasets demonstrate the superiority of our approach over baselines on adversarial robustness, without compromising the model's generalization ability.
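The abstract names two generic ingredients behind MixVAT: mixup data augmentation and virtual adversarial training (VAT) on unlabelled data. The sketch below is a minimal, hypothetical illustration of how those two standard building blocks can be combined in PyTorch, not the paper's actual MixVAT algorithm; the function names, the `fine_tune_step` composition, and hyper-parameters such as `lam_vat`, `xi`, and `eps` are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F


def _l2_normalize(d):
    # Scale each per-sample perturbation to unit L2 norm (assumes 4-D image batches).
    return d / (d.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-8)


def mixup_inputs(x, alpha=1.0):
    # Standard mixup (Zhang et al., 2018) applied to the inputs only, which is
    # sufficient for an unlabelled consistency term.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    index = torch.randperm(x.size(0), device=x.device)
    return lam * x + (1.0 - lam) * x[index]


def vat_loss(model, x, xi=1e-6, eps=2.0, n_power=1):
    # Virtual adversarial training regularizer (Miyato et al., 2019): the KL
    # divergence between predictions on x and on x plus a perturbation found
    # by power iteration. No labels are required.
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)
    d = _l2_normalize(torch.randn_like(x))
    for _ in range(n_power):
        d.requires_grad_(True)
        log_p_adv = F.log_softmax(model(x + xi * d), dim=1)
        adv_div = F.kl_div(log_p_adv, p_clean, reduction="batchmean")
        grad = torch.autograd.grad(adv_div, d)[0]
        d = _l2_normalize(grad.detach())
    log_p_adv = F.log_softmax(model(x + eps * d), dim=1)
    return F.kl_div(log_p_adv, p_clean, reduction="batchmean")


def fine_tune_step(model, x_labelled, y, x_unlabelled, lam_vat=1.0, alpha=1.0):
    # One hypothetical fine-tuning step: supervised cross-entropy on labelled
    # data plus a VAT consistency term on mixup-augmented unlabelled data.
    supervised = F.cross_entropy(model(x_labelled), y)
    consistency = vat_loss(model, mixup_inputs(x_unlabelled, alpha))
    return supervised + lam_vat * consistency
```

Under this setup, robustness of the fine-tuned vision transformer would then be evaluated against standard white-box attacks such as FGSM and PGD, as described in the abstract.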
Journal Description:
The IEEE Transactions on Big Data publishes peer-reviewed articles focusing on big data. These articles present innovative research ideas and application results across disciplines, including novel theories, algorithms, and applications. Research areas cover a wide range, such as big data analytics, visualization, curation, management, semantics, infrastructure, standards, performance analysis, intelligence extraction, scientific discovery, security, privacy, and legal issues specific to big data. The journal also prioritizes applications of big data in fields generating massive datasets.