Bi-Level Spatial and Channel-aware Transformer for Learned Image Compression

Hamidreza Soltani, Erfan Ghasemi

arXiv - MATH - Information Theory, published 2024-08-07. DOI: arxiv-2408.03842 (https://doi.org/arxiv-2408.03842)
Abstract
Recent advancements in learned image compression (LIC) methods have
demonstrated superior performance over traditional hand-crafted codecs. These
learning-based methods often employ convolutional neural networks (CNNs) or
Transformer-based architectures. However, these nonlinear approaches frequently
overlook the frequency characteristics of images, which limits their
compression efficiency. To address this issue, we propose a novel
Transformer-based image compression method that enhances the transformation
stage by considering frequency components within the feature map. Our method
integrates a novel Hybrid Spatial-Channel Attention Transformer Block (HSCATB),
in which a spatial branch processes high- and low-frequency components
independently at the attention layer, while a Channel-aware Self-Attention
(CaSA) module captures information across channels, significantly improving
compression performance.
Additionally, we introduce a Mixed Local-Global Feed Forward Network (MLGFFN)
within the Transformer block to enhance the extraction of diverse and rich
information, which is crucial for effective compression. These innovations
collectively improve the transformation's ability to project data into a more
decorrelated latent space, thereby boosting overall compression efficiency.
Experimental results demonstrate that our framework surpasses state-of-the-art
LIC methods in rate-distortion performance.
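The paper does not include code, but the Channel-aware Self-Attention (CaSA) idea can be illustrated with a generic channel-wise attention sketch, in the style of "transposed" attention used by prior restoration transformers: each channel of the feature map is treated as one token, so the attention map is C x C (mixing information across channels) rather than (H*W) x (H*W). The function name and the projection matrices `wq`, `wk`, `wv` below are hypothetical, not taken from the paper; this is a minimal single-head sketch, not the authors' implementation.

```python
import numpy as np

def channel_self_attention(x, wq, wk, wv):
    """Hypothetical sketch of channel-wise self-attention.

    x: feature map of shape (C, H, W).
    wq, wk, wv: (C, C) projection matrices (assumed, for illustration).

    Attention is computed over the channel axis, giving a (C, C)
    affinity map, so the cost grows with C^2 instead of (H*W)^2.
    """
    C, H, W = x.shape
    tokens = x.reshape(C, H * W)               # each channel is one token
    q, k, v = wq @ tokens, wk @ tokens, wv @ tokens
    scores = (q @ k.T) / np.sqrt(H * W)        # (C, C) channel affinities
    scores -= scores.max(axis=-1, keepdims=True)   # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    out = attn @ v                             # mix information across channels
    return out.reshape(C, H, W)
```

With identity projections, each output channel is a convex combination of the input channels, which makes the cross-channel mixing explicit; a real block would additionally use learned projections, multiple heads, and a feed-forward network such as the paper's MLGFFN.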