Bi-Level Spatial and Channel-aware Transformer for Learned Image Compression

Hamidreza Soltani, Erfan Ghasemi
{"title":"Bi-Level Spatial and Channel-aware Transformer for Learned Image Compression","authors":"Hamidreza Soltani, Erfan Ghasemi","doi":"arxiv-2408.03842","DOIUrl":null,"url":null,"abstract":"Recent advancements in learned image compression (LIC) methods have\ndemonstrated superior performance over traditional hand-crafted codecs. These\nlearning-based methods often employ convolutional neural networks (CNNs) or\nTransformer-based architectures. However, these nonlinear approaches frequently\noverlook the frequency characteristics of images, which limits their\ncompression efficiency. To address this issue, we propose a novel\nTransformer-based image compression method that enhances the transformation\nstage by considering frequency components within the feature map. Our method\nintegrates a novel Hybrid Spatial-Channel Attention Transformer Block (HSCATB),\nwhere a spatial-based branch independently handles high and low frequencies at\nthe attention layer, and a Channel-aware Self-Attention (CaSA) module captures\ninformation across channels, significantly improving compression performance.\nAdditionally, we introduce a Mixed Local-Global Feed Forward Network (MLGFFN)\nwithin the Transformer block to enhance the extraction of diverse and rich\ninformation, which is crucial for effective compression. These innovations\ncollectively improve the transformation's ability to project data into a more\ndecorrelated latent space, thereby boosting overall compression efficiency.\nExperimental results demonstrate that our framework surpasses state-of-the-art\nLIC methods in rate-distortion performance.","PeriodicalId":501082,"journal":{"name":"arXiv - MATH - Information Theory","volume":"8 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - MATH - Information Theory","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.03842","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Recent advancements in learned image compression (LIC) methods have demonstrated superior performance over traditional hand-crafted codecs. These learning-based methods often employ convolutional neural networks (CNNs) or Transformer-based architectures. However, these nonlinear approaches frequently overlook the frequency characteristics of images, which limits their compression efficiency. To address this issue, we propose a novel Transformer-based image compression method that enhances the transformation stage by considering frequency components within the feature map. Our method integrates a novel Hybrid Spatial-Channel Attention Transformer Block (HSCATB), where a spatial-based branch independently handles high and low frequencies at the attention layer, and a Channel-aware Self-Attention (CaSA) module captures information across channels, significantly improving compression performance. Additionally, we introduce a Mixed Local-Global Feed Forward Network (MLGFFN) within the Transformer block to enhance the extraction of diverse and rich information, which is crucial for effective compression. These innovations collectively improve the transformation's ability to project data into a more decorrelated latent space, thereby boosting overall compression efficiency. Experimental results demonstrate that our framework surpasses state-of-the-art LIC methods in rate-distortion performance.
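The abstract names three components (the HSCATB with a frequency-split spatial branch, the CaSA channel attention, and the MLGFFN) but gives no implementation details. The following is a minimal PyTorch sketch of how these pieces might fit together; the pooling-based frequency split, the use of plain 3x3 convolutions as stand-ins for the spatial attention branches, the module interfaces, and all hyperparameters are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def frequency_split(x: torch.Tensor, kernel_size: int = 3):
    """Assumed decomposition: a local average approximates the low
    frequencies; the residual carries the high frequencies."""
    low = F.avg_pool2d(x, kernel_size, stride=1, padding=kernel_size // 2)
    return low, x - low


class CaSA(nn.Module):
    """Sketch of Channel-aware Self-Attention: queries and keys attend
    across channels, so the attention map is (C/heads) x (C/heads) and
    the cost is linear in the number of pixels."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.qkv = nn.Conv2d(dim, dim * 3, 1)
        self.proj = nn.Conv2d(dim, dim, 1)
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=1)
        shape = (b, self.num_heads, c // self.num_heads, h * w)
        q, k, v = q.reshape(shape), k.reshape(shape), v.reshape(shape)
        # Cosine-similarity channel attention, scaled by a learned temperature.
        q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
        attn = ((q @ k.transpose(-2, -1)) * self.temperature).softmax(dim=-1)
        return self.proj((attn @ v).reshape(b, c, h, w))


class MLGFFN(nn.Module):
    """Sketch of a Mixed Local-Global Feed-Forward Network: a depthwise-conv
    path mixes local detail while a pooled gating path injects global
    context; the two paths run in parallel on halves of the expanded
    features."""

    def __init__(self, dim: int, expansion: int = 2):
        super().__init__()
        hidden = dim * expansion
        self.inp = nn.Conv2d(dim, hidden, 1)
        self.local = nn.Conv2d(hidden // 2, hidden // 2, 3, padding=1,
                               groups=hidden // 2)  # depthwise: local mixing
        self.global_gate = nn.Sequential(  # pooled: global context
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(hidden // 2, hidden // 2, 1))
        self.out = nn.Conv2d(hidden, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local, global_ = self.inp(x).chunk(2, dim=1)
        local = self.local(local)
        global_ = global_ * torch.sigmoid(self.global_gate(global_))
        return self.out(F.gelu(torch.cat([local, global_], dim=1)))


class HSCATB(nn.Module):
    """Sketch of the Hybrid Spatial-Channel Attention Transformer Block:
    normalized features are split into frequency bands mixed by separate
    spatial branches, then CaSA and the MLGFFN are applied, each with a
    residual connection."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.norm1 = nn.GroupNorm(1, dim)
        self.low_branch = nn.Conv2d(dim, dim, 3, padding=1)
        self.high_branch = nn.Conv2d(dim, dim, 3, padding=1)
        self.norm2 = nn.GroupNorm(1, dim)
        self.casa = CaSA(dim, num_heads)
        self.norm3 = nn.GroupNorm(1, dim)
        self.ffn = MLGFFN(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        low, high = frequency_split(self.norm1(x))
        x = x + self.low_branch(low) + self.high_branch(high)
        x = x + self.casa(self.norm2(x))
        return x + self.ffn(self.norm3(x))
```

Shapes are preserved end to end, e.g. HSCATB(dim=192)(torch.randn(1, 192, 64, 64)) returns a (1, 192, 64, 64) tensor, so a block of this form could slot into the analysis or synthesis transform of a standard LIC autoencoder.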