Redesign Visual Transformer For Small Datasets
Jingjie Wang, Xiang Wei, Siyang Lu, Mingquan Wang, Xiaoyu Liu, Wei Lu
Scalable Computing: Practice and Experience, published 2022-12-01
DOI: 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00077
Citations: 0
Abstract
Nowadays, the self-attention mechanism has become a prominent approach to visual feature extraction, alongside convolution. Transformer networks built on self-attention have developed rapidly and achieved remarkable results on visual tasks, and self-attention shows the potential to replace convolution as the primary method of visual feature extraction in ubiquitous intelligence. Nevertheless, the development of the Visual Transformer still suffers from the following problems: a) the self-attention mechanism has a weak inductive bias, which leads to a large data demand and high training cost; b) the Transformer backbone network cannot adapt well to low visual information density and performs unsatisfactorily on low-resolution and small-scale datasets. To tackle these two problems, this paper proposes a novel algorithm based on the mature Visual Transformer architecture, dedicated to exploring the performance potential of the Transformer network and its core self-attention mechanism on small-scale datasets. Specifically, we first propose a network architecture equipped with a multi-coordination strategy to solve the self-attention degradation problem inherent in the existing Transformer architecture. Secondly, we introduce consistency regularization into the Transformer so that the self-attention mechanism acquires more reliable feature representations when visual features are insufficient. In the experiments, CSwin Transformer, a mainstream visual model, is selected to verify the effectiveness of the proposed method on prevalent small datasets, and superior results are achieved. In particular, without pre-training, our accuracy on the CIFAR-100 dataset is improved by 1.24% compared to CSwin.
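The abstract does not spell out the loss used for its consistency regularization, so the sketch below only illustrates the general technique under common assumptions: two augmented views of the same image are passed through the same model, and a symmetric KL divergence penalizes disagreement between their predictions. The function name `consistency_loss` and the weight `lambda_cons` are illustrative, not taken from the paper.

```python
# Minimal sketch of consistency regularization for a vision model (PyTorch).
# Assumption: the paper's exact formulation is not given in the abstract; this
# follows the common recipe of penalizing divergence between predictions on
# two augmented views of the same image.

import torch
import torch.nn.functional as F


def consistency_loss(model: torch.nn.Module,
                     view_a: torch.Tensor,
                     view_b: torch.Tensor) -> torch.Tensor:
    """Symmetric KL divergence between predictions on two augmented views."""
    logits_a = model(view_a)  # (batch, num_classes)
    logits_b = model(view_b)
    log_p_a = F.log_softmax(logits_a, dim=-1)
    log_p_b = F.log_softmax(logits_b, dim=-1)
    # F.kl_div(input, target) expects log-probabilities as input and
    # probabilities as target; average the two directions for symmetry.
    kl_ab = F.kl_div(log_p_b, log_p_a.exp(), reduction="batchmean")
    kl_ba = F.kl_div(log_p_a, log_p_b.exp(), reduction="batchmean")
    return 0.5 * (kl_ab + kl_ba)


# Usage (hypothetical): add the term to the supervised objective, e.g.
#   total = ce_loss + lambda_cons * consistency_loss(model, aug1(x), aug2(x))
```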
Journal Introduction:
The area of scalable computing has matured and reached a point where new issues and trends require a professional forum. SCPE will provide this avenue by publishing original refereed papers that address the present as well as the future of parallel and distributed computing. The journal will focus on algorithm development, implementation and execution on real-world parallel architectures, and application of parallel and distributed computing to the solution of real-life problems.