WLA-Net: A Whole New Light-weight Architecture For Visual Task

Liuhao Yu, Danyang Yao, Mei Fang, Lei Zhu
DOI: 10.1145/3574131.3574458
Published in: Proceedings of the 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry
Publication date: 2022-12-27
Citation count: 0

Abstract

In this paper, we introduce WLA-Net, a new convolutional network with fewer parameters and lower FLOPs. WLA-Net is based on a cross architecture that combines attention mechanisms and residual blocks to build lightweight deep neural networks. While improving classification accuracy, the model's parameter count is reduced, making the model more lightweight and improving resource utilization. A lightweight convolution module is designed within the network that performs image classification accurately and efficiently, and a large-kernel convolution attention module is introduced to further improve classification accuracy. In addition, a new attention module is proposed that mines information aggregated along the channel dimension to extract more effective deep features; it effectively fuses the channel features of the image to achieve higher accuracy. At the same time, a new residual structure is designed to fuse information across feature channels so that they are more closely related. The model's image classification accuracy is verified on large natural-image datasets. Experimental results show that the proposed method achieves state-of-the-art performance.
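The abstract describes an attention module that aggregates information along the channel dimension and reweights channel features, combined with a residual shortcut. The paper does not give the exact formulation here, but a minimal NumPy sketch of that general pattern (squeeze-excitation-style channel gating plus a residual add; weights are random placeholders, not the authors' trained parameters) might look like:

```python
import numpy as np

def channel_attention_block(x, reduction=4, seed=0):
    """Illustrative channel-attention + residual block (not WLA-Net's exact module).

    x: feature map of shape (C, H, W).
    """
    C = x.shape[0]
    # Squeeze: global average pooling aggregates each channel to a scalar.
    s = x.mean(axis=(1, 2))                      # shape (C,)
    # Excite: a small bottleneck MLP; random weights stand in for learned ones.
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((C // reduction, C)) * 0.1
    w2 = rng.standard_normal((C, C // reduction)) * 0.1
    h = np.maximum(w1 @ s, 0.0)                  # ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ h)))       # sigmoid gate in (0, 1)
    # Reweight channels with the gate, then add the residual shortcut.
    return x + x * gate[:, None, None]

x = np.ones((8, 4, 4), dtype=np.float32)
y = channel_attention_block(x)
print(y.shape)  # (8, 4, 4)
```

The residual add keeps the original features intact while the gate emphasizes informative channels, which is one common way to make the channel information "more closely related" at low parameter cost.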