Enhancing Image Quality by Reducing Compression Artifacts Using Dynamic Window Swin Transformer

Zhenchao Ma; Yixiao Wang; Hamid Reza Tohidypour; Panos Nasiopoulos; Victor C. M. Leung

IEEE Journal on Emerging and Selected Topics in Circuits and Systems
DOI: 10.1109/JETCAS.2024.3392868 · Published 2024-04-24
https://ieeexplore.ieee.org/document/10508045/
Citations: 0
Abstract
Video/image compression codecs exploit characteristics of the human visual system, namely its varying sensitivity to certain frequencies, brightness, contrast, and colors, to achieve high compression. Inevitably, compression introduces undesirable visual artifacts, and as compression standards improve, restoring image quality becomes more challenging. Recently, deep-learning-based models, especially transformer-based image restoration models, have emerged as a promising approach for reducing compression artifacts, demonstrating very good restoration performance. However, all proposed transformer-based restoration methods use the same fixed window size, confining pixel dependencies to fixed areas. In this paper, we propose a new image restoration method that addresses this shortcoming of existing methods by first introducing a content-adaptive dynamic window applied to the self-attention layers of the Swin Transformer; these layers are in turn weighted by our channel and spatial attention module, mainly capturing long- and medium-range pixel dependencies. In addition, local dependencies are further enhanced by integrating a CNN-based network inside the Swin Transformer block to process the image augmented by our self-attention module. Performance evaluations on images compressed with one of the latest compression standards, Versatile Video Coding (VVC), show that our proposed approach achieves an average gain of 1.32 dB in Peak Signal-to-Noise Ratio (PSNR) across three different benchmark datasets for VVC compression artifact reduction. Additionally, our proposed approach improves the visual quality of compressed images by an average of 2.7% in terms of Video Multimethod Assessment Fusion (VMAF).
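To make the window-based attention idea concrete, here is a minimal NumPy sketch of self-attention computed inside non-overlapping windows whose size is chosen per image from its content. This is not the paper's implementation: the selection heuristic (feature variance), the candidate window sizes, and the identity Q/K/V projections are all simplifying assumptions for illustration only.

```python
import numpy as np

def choose_window_size(feat, candidates=(4, 8)):
    """Hypothetical content-adaptive rule: smooth features (low variance)
    get larger windows to capture longer-range dependencies; detailed
    features get smaller ones. The real paper's rule may differ."""
    return candidates[-1] if feat.var() < 0.5 else candidates[0]

def window_partition(x, ws):
    """Split an (H, W, C) feature map into (num_windows, ws*ws, C) tokens."""
    H, W, C = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws * ws, C)

def window_attention(windows):
    """Single-head self-attention within each window (identity Q/K/V
    projections kept for brevity)."""
    q = k = v = windows
    d = windows.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v                                # weighted sum of values

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 8, 16)).astype(np.float32)  # toy feature map
ws = choose_window_size(feat)                          # content-dependent size
out = window_attention(window_partition(feat, ws))
print(ws, out.shape)
```

With a fixed window size, every image is partitioned the same way; the point of the dynamic window is that `ws`, and hence the receptive field of each attention operation, varies with the content being restored.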
About the Journal
The IEEE Journal on Emerging and Selected Topics in Circuits and Systems is published quarterly and solicits, with particular emphasis on emerging areas, special issues on topics that cover the entire scope of the IEEE Circuits and Systems (CAS) Society, namely the theory, analysis, design, tools, and implementation of circuits and systems, spanning their theoretical foundations, applications, and architectures for signal and information processing.