Compressing Vision Transformer from the View of Model Property in Frequency Domain
Zhenyu Wang, Xuemei Xie, Hao Luo, Tao Huang, Weisheng Dong, Kai Xiong, Yongxu Liu, Xuyang Li, Fan Wang, Guangming Shi
International Journal of Computer Vision, published online 2025-08-28. DOI: 10.1007/s11263-025-02561-w
Abstract
Vision Transformers (ViTs) have recently demonstrated significant potential in computer vision, but their high computational costs remain a challenge. To address this limitation, various methods have been proposed to compress ViTs. Most approaches utilize spatial-domain information and adapt techniques from convolutional neural networks (CNNs) pruning to reduce channels or tokens. However, differences between ViTs and CNNs in the frequency domain make these methods vulnerable to noise in the spatial domain, potentially resulting in erroneous channel or token removal and substantial performance drops. Recent studies suggest that high-frequency signals carry limited information for ViTs, and that the self-attention mechanism functions similarly to a low-pass filter. Inspired by these insights, this paper proposes a joint compression method that leverages properties of ViTs in the frequency domain. Specifically, a metric called Low-Frequency Sensitivity (LFS) is used to accurately identify and compress redundant channels, while a token-merging approach, assisted by Low-Frequency Energy (LFE), is introduced to reduce tokens. Through joint channel and token compression, the proposed method reduces the FLOPs of ViTs by over 50% with less than a 1% performance drop on ImageNet-1K and achieves approximately a 40% reduction in FLOPs for dense prediction tasks, including object detection and semantic segmentation.
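To make the frequency-domain idea concrete, below is a minimal, hypothetical sketch (not the paper's implementation): it reshapes patch tokens onto their spatial grid, applies a 2D FFT per channel, and scores each channel by the fraction of spectral energy falling inside a centered low-frequency window. The function name `low_frequency_energy`, the square cutoff window, and the cutoff ratio are all assumptions for illustration; the actual LFS and LFE definitions are specified in the paper.

```python
# Illustrative sketch only: a toy frequency-domain score for ViT token features.
# Names and the cutoff choice are hypothetical, not the paper's exact LFS/LFE.
import torch


def low_frequency_energy(tokens: torch.Tensor, grid_hw: tuple, cutoff: float = 0.25) -> torch.Tensor:
    """tokens: (B, N, C) patch tokens; grid_hw: (H, W) with H * W == N.

    Returns a (B, C) score in [0, 1]: the fraction of each channel's spectral
    energy that lies inside a centered low-frequency window of relative size
    `cutoff`. A per-token score could be built analogously over channels.
    """
    B, N, C = tokens.shape
    H, W = grid_hw
    x = tokens.transpose(1, 2).reshape(B, C, H, W)               # (B, C, H, W)
    spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))   # center DC component
    power = spec.abs() ** 2                                      # spectral energy
    h, w = int(H * cutoff), int(W * cutoff)
    ch, cw = H // 2, W // 2
    low = power[..., ch - h:ch + h + 1, cw - w:cw + w + 1].sum(dim=(-2, -1))
    total = power.sum(dim=(-2, -1)).clamp_min(1e-8)
    return low / total


# Example usage: channels whose energy concentrates at low frequencies would be
# the kind of structure a low-frequency-based criterion can rank and compress.
scores = low_frequency_energy(torch.randn(2, 196, 768), grid_hw=(14, 14))
print(scores.shape)  # torch.Size([2, 768])
```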
About the Journal
The International Journal of Computer Vision (IJCV) serves as a platform for sharing new research findings in the rapidly growing field of computer vision. It publishes 12 issues annually and presents high-quality, original contributions to the science and engineering of computer vision. The journal encompasses various types of articles to cater to different research outputs.
Regular articles, which span up to 25 journal pages, focus on significant technical advancements that are of broad interest to the field. These articles showcase substantial progress in computer vision.
Short articles, limited to 10 pages, offer a swift publication path for novel research outcomes. They provide a quicker means for sharing new findings with the computer vision community.
Survey articles, comprising up to 30 pages, offer critical evaluations of the current state of the art in computer vision or tutorial presentations of relevant topics. These articles provide comprehensive and insightful overviews of specific subject areas.
In addition to technical articles, the journal also includes book reviews, position papers, and editorials by prominent scientific figures. These contributions serve to complement the technical content and provide valuable perspectives.
The journal encourages authors to include supplementary material online, such as images, video sequences, data sets, and software. This additional material enhances the understanding and reproducibility of the published research.
Overall, the International Journal of Computer Vision is a comprehensive publication for researchers in this rapidly growing field. It covers a range of article types, provides supplementary online resources, and facilitates the dissemination of impactful research.