Shifted Spatial-Spectral Convolution for Deep Neural Networks

Yuhao Xu, Hideki Nakayama
Proceedings of the ACM Multimedia Asia, December 15, 2019. DOI: 10.1145/3338533.3366575
Deep convolutional neural networks (CNNs) extract local features and learn spatial representations via convolutions in the spatial domain. Beyond spatial information, some works also capture spectral information in the frequency domain through domain-switching methods such as the discrete Fourier transform (DFT) and the discrete cosine transform (DCT). However, most works attend to only a single domain and are thus prone to ignoring important features from the other. In this work, we propose a novel network structure that combines spatial and spectral convolutions and extracts features in both the spatial and frequency domains. The input channels are divided into two groups, for spatial and spectral representations respectively, which are then integrated for feature fusion. Meanwhile, we design a channel-shifting mechanism to ensure that both the spatial and spectral information of every channel is obtained equally and adequately throughout the deep network. Experimental results demonstrate that, compared with state-of-the-art single-domain CNN models, our shifted spatial-spectral convolution-based networks achieve better performance on image classification datasets including CIFAR-10, CIFAR-100, and SVHN, with considerably fewer parameters.
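The channel-shifting idea described in the abstract can be sketched in a few lines: channels are cyclically rolled before being split into a spatial group and a spectral group, so that with a nonzero shift every channel eventually visits both branches across layers. The sketch below is a minimal NumPy illustration of this mechanism and of an orthonormal DCT-II for the spectral branch; the function names, the shift amount, and the 50/50 split are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (one option for the spectral branch)."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)          # scale the DC row for orthonormality
    return m * np.sqrt(2.0 / n)

def shifted_split(x, shift):
    """Cyclically roll channels by `shift`, then split them into a spatial
    group and a spectral group (hypothetical 50/50 split)."""
    c = x.shape[1]
    x = np.roll(x, shift, axis=1)           # channel-shifting step
    return x[:, : c // 2], x[:, c // 2 :]   # (spatial group, spectral group)

def dct2(block):
    """Separable 2-D DCT of a square spatial block."""
    d = dct_matrix(block.shape[-1])
    return d @ block @ d.T

# Toy feature map: batch=1, 4 channels, 2x2 spatial resolution.
x = np.arange(16, dtype=np.float32).reshape(1, 4, 2, 2)
spatial, spectral = shifted_split(x, shift=1)

# The spectral group would then be processed in the frequency domain,
# e.g. after a per-channel 2-D DCT:
spectral_freq = np.array([[dct2(ch) for ch in img] for img in spectral])
```

Repeating `shifted_split` with the same nonzero shift at every layer rotates the channel-to-branch assignment, which is one plausible reading of how "spatial and spectral information of every channel" is obtained equally over depth.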