Latest Articles from the IEEE Journal on Emerging and Selected Topics in Circuits and Systems

FVIFormer: Flow-Guided Global-Local Aggregation Transformer Network for Video Inpainting
IF 3.7 · CAS Q2 (Engineering & Technology)
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Pub Date : 2024-04-25 DOI: 10.1109/JETCAS.2024.3392972
Weiqing Yan;Yiqiu Sun;Guanghui Yue;Wei Zhou;Hantao Liu
{"title":"FVIFormer: Flow-Guided Global-Local Aggregation Transformer Network for Video Inpainting","authors":"Weiqing Yan;Yiqiu Sun;Guanghui Yue;Wei Zhou;Hantao Liu","doi":"10.1109/JETCAS.2024.3392972","DOIUrl":"10.1109/JETCAS.2024.3392972","url":null,"abstract":"Video inpainting has been extensively used in recent years. Established works usually utilise the similarity between the missing region and its surrounding features to inpaint in the visually damaged content in a multi-stage manner. However, due to the complexity of the video content, it may result in the destruction of structural information of objects within the video. In addition to this, the presence of moving objects in the damaged regions of the video can further increase the difficulty of this work. To address these issues, we propose a flow-guided global-Local aggregation Transformer network for video inpainting. First, we use a pre-trained optical flow complementation network to repair the defective optical flow of video frames. Then, we propose a content inpainting module, which use the complete optical flow as a guide, and propagate the global content across the video frames using efficient temporal and spacial Transformer to inpaint in the corrupted regions of the video. Finally, we propose a structural rectification module to enhance the coherence of content around the missing regions via combining the extracted local and global features. In addition, considering the efficiency of the overall framework, we also optimized the self-attention mechanism to improve the speed of training and testing via depth-wise separable encoding. We validate the effectiveness of our method on the YouTube-VOS and DAVIS video datasets. Extensive experiment results demonstrate the effectiveness of our approach in edge-complementing video content that has undergone stabilisation algorithms.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 2","pages":"235-244"},"PeriodicalIF":3.7,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140805941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing Image Quality by Reducing Compression Artifacts Using Dynamic Window Swin Transformer
IF 3.7 · CAS Q2 (Engineering & Technology)
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Pub Date : 2024-04-24 DOI: 10.1109/JETCAS.2024.3392868
Zhenchao Ma;Yixiao Wang;Hamid Reza Tohidypour;Panos Nasiopoulos;Victor C. M. Leung
{"title":"Enhancing Image Quality by Reducing Compression Artifacts Using Dynamic Window Swin Transformer","authors":"Zhenchao Ma;Yixiao Wang;Hamid Reza Tohidypour;Panos Nasiopoulos;Victor C. M. Leung","doi":"10.1109/JETCAS.2024.3392868","DOIUrl":"10.1109/JETCAS.2024.3392868","url":null,"abstract":"Video/image compression codecs utilize the characteristics of the human visual system and its varying sensitivity to certain frequencies, brightness, contrast, and colors to achieve high compression. Inevitably, compression introduces undesirable visual artifacts. As compression standards improve, restoring image quality becomes more challenging. Recently, deep learning based models, especially transformer-based image restoration models, have emerged as a promising approach for reducing compression artifacts, demonstrating very good restoration performance. However, all the proposed transformer based restoration methods use a same fixed window size, confining pixel dependencies in fixed areas. In this paper, we propose a new and unique image restoration method that addresses the shortcoming of existing methods by first introducing a content adaptive dynamic window that is applied to self-attention layers which in turn are weighted by our channel and spatial attention module utilized in Swin Transformer to mainly capture long and medium range pixel dependencies. In addition, local dependencies are further enhanced by integrating a CNN based network inside the Swin Transformer Block to process the image augmented by our self-attention module. Performance evaluations using images compressed by one of the latest compression standards, namely the Versatile Video Coding (VVC), when measured in Peak Signal-to-Noise Ratio (PSNR), our proposed approach achieves an average gain of 1.32dB on three different benchmark datasets for VVC compression artifacts reduction. Additionally, our proposed approach improves the visual quality of compressed images by an average of 2.7% in terms of Video Multimethod Assessment Fusion (VMAF).","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 2","pages":"275-285"},"PeriodicalIF":3.7,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140805942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Low Latency Variational Autoencoder on FPGAs
IF 3.7 · CAS Q2 (Engineering & Technology)
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Pub Date : 2024-04-16 DOI: 10.1109/JETCAS.2024.3389660
Zhiqiang Que;Minghao Zhang;Hongxiang Fan;He Li;Ce Guo;Wayne Luk
{"title":"Low Latency Variational Autoencoder on FPGAs","authors":"Zhiqiang Que;Minghao Zhang;Hongxiang Fan;He Li;Ce Guo;Wayne Luk","doi":"10.1109/JETCAS.2024.3389660","DOIUrl":"10.1109/JETCAS.2024.3389660","url":null,"abstract":"Variational Autoencoders (VAEs) are at the forefront of generative model research, combining probabilistic theory with neural networks to learn intricate data structures and synthesize complex data. However, designs targeting VAEs are computationally intensive, often involving high latency that precludes real-time operations. This paper introduces a novel low-latency hardware pipeline on FPGAs for fully-stochastic VAE inference. We propose a custom Gaussian sampling layer and a layer-wise tailored pipeline architecture which, for the first time in accelerating VAEs, are optimized through High-Level Synthesis (HLS). Evaluation results show that our VAE design is respectively 82 times and 208 times faster than CPU and GPU implementations. When compared with a state-of-the-art FPGA-based autoencoder design for anomaly detection, our VAE design is 61 times faster with the same model accuracy, which shows that our approach contributes to high performance and low latency FPGA-based VAE systems.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 2","pages":"323-333"},"PeriodicalIF":3.7,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140615157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CGVC-T: Contextual Generative Video Compression With Transformers
IF 3.7 · CAS Q2 (Engineering & Technology)
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Pub Date : 2024-04-10 DOI: 10.1109/JETCAS.2024.3387301
Pengli Du;Ying Liu;Nam Ling
{"title":"CGVC-T: Contextual Generative Video Compression With Transformers","authors":"Pengli Du;Ying Liu;Nam Ling","doi":"10.1109/JETCAS.2024.3387301","DOIUrl":"10.1109/JETCAS.2024.3387301","url":null,"abstract":"With the high demands for video streaming, recent years have witnessed a growing interest in utilizing deep learning for video compression. Most existing neural video compression approaches adopt the predictive residue coding framework, which is sub-optimal in removing redundancy across frames. In addition, purely minimizing the pixel-wise differences between the raw frame and the decompressed frame is ineffective in improving the perceptual quality of videos. In this paper, we propose a contextual generative video compression method with transformers (CGVC-T), which adopts generative adversarial networks (GAN) for perceptual quality enhancement and applies contextual coding to improve coding efficiency. Besides, we employ a hybrid transformer-convolution structure in the auto-encoders of the CGVC-T, which learns both global and local features within video frames to remove temporal and spatial redundancy. Furthermore, we introduce novel entropy models to estimate the probability distributions of the compressed latent representations, so that the bit rates required for transmitting the compressed video are decreased. The experiments on HEVC, UVG, and MCL-JCV datasets demonstrate that the perceptual quality of our CGVC-T in terms of FID, KID, and LPIPS scores surpasses state-of-the-art learned video codecs, the industrial video codecs x264 and x265, as well as the official reference software JM, HM, and VTM. Our CGVC-T also offers superior DISTS scores among all compared learned video codecs.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 2","pages":"209-223"},"PeriodicalIF":3.7,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140571482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Physically Guided Generative Adversarial Network for Holographic 3D Content Generation From Multi-View Light Field
IF 3.7 · CAS Q2 (Engineering & Technology)
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Pub Date : 2024-04-09 DOI: 10.1109/JETCAS.2024.3386672
Yunhui Zeng;Zhenwei Long;Yawen Qiu;Shiyi Wang;Junjie Wei;Xin Jin;Hongkun Cao;Zhiheng Li
{"title":"Physically Guided Generative Adversarial Network for Holographic 3D Content Generation From Multi-View Light Field","authors":"Yunhui Zeng;Zhenwei Long;Yawen Qiu;Shiyi Wang;Junjie Wei;Xin Jin;Hongkun Cao;Zhiheng Li","doi":"10.1109/JETCAS.2024.3386672","DOIUrl":"10.1109/JETCAS.2024.3386672","url":null,"abstract":"Realizing high-fidelity three-dimensional (3D) scene representation through holography presents a formidable challenge, primarily due to the unknown mechanism of the optimal hologram and huge computational load as well as memory usage. Herein, we propose a Physically Guided Generative Adversarial Network (PGGAN), which is the first generative model to transform the multi-view light field directly to holographic 3D content. PGGAN harmoniously fuses the fidelity of data-driven learning with the rigor of physical optics principles, ensuring a stable reconstruction quality across wide field of view, which is unreachable by current central-view-centric approaches. The proposed framework presents an innovative encoder-generator-discriminator, which is informed by a physical optics model. It benefits from the speed and adaptability of data-driven methods to facilitate rapid learning and effectively transfer to novel scenes, while its physics-based guidance ensures that the generated holograms adhere to holographic standards. A unique, differentiable physical model facilitates end-to-end training, which aligns the generative process with the “holographic space”, thereby improving the quality of the reconstructed light fields. Employing an adaptive loss strategy, PGGAN dynamically adjusts the influence of physical guidance in the initial training stages, later optimizing for reconstruction accuracy. Empirical evaluations reveal PGGAN’s exceptional ability to swiftly generate a detailed hologram in as little as 0.002 seconds, significantly eclipsing current state-of-the-art techniques in speed while maintaining superior angular reconstruction fidelity. These results demonstrate PGGAN’s effectiveness in producing high-quality holograms rapidly from multi-view datasets, advancing real-time holographic rendering significantly.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 2","pages":"286-298"},"PeriodicalIF":3.7,"publicationDate":"2024-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140592787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Human–Machine Collaborative Image Compression Method Based on Implicit Neural Representations
IF 3.7 · CAS Q2 (Engineering & Technology)
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Pub Date : 2024-04-09 DOI: 10.1109/JETCAS.2024.3386639
Huanyang Li;Xinfeng Zhang
{"title":"Human–Machine Collaborative Image Compression Method Based on Implicit Neural Representations","authors":"Huanyang Li;Xinfeng Zhang","doi":"10.1109/JETCAS.2024.3386639","DOIUrl":"10.1109/JETCAS.2024.3386639","url":null,"abstract":"With the explosive increase in the volume of images intended for analysis by AI, image coding for machine have been proposed to transmit information in a machine-interpretable format, thereby enhancing image compression efficiency. However, such efficient coding schemes often lead to issues like loss of image details and features, and unclear semantic information due to high data compression ratio, making them less suitable for human vision domains. Thus, it is a critical problem to balance image visual quality and machine vision accuracy at a given compression ratio. To address these issues, we introduce a human-machine collaborative image coding framework based on Implicit Neural Representations (INR), which effectively reduces the transmitted information for machine vision tasks at the decoding side while maintaining high-efficiency image compression for human vision against INR compression framework. To enhance the model’s perception of images for machine vision, we design a semantic embedding enhancement module to assist in understanding image semantics. Specifically, we employ the Swin Transformer model to initialize image features, ensuring that the embedding of the compression model are effectively applicable to downstream visual tasks. Extensive experimental results demonstrate that our method significantly outperforms other image compression methods in classification tasks while ensuring image compression efficiency.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 2","pages":"198-208"},"PeriodicalIF":3.7,"publicationDate":"2024-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140593443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FPGA Codec System of Learned Image Compression With Algorithm-Architecture Co-Optimization
IF 3.7 · CAS Q2 (Engineering & Technology)
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Pub Date : 2024-04-08 DOI: 10.1109/JETCAS.2024.3386328
Heming Sun;Qingyang Yi;Masahiro Fujita
{"title":"FPGA Codec System of Learned Image Compression With Algorithm-Architecture Co-Optimization","authors":"Heming Sun;Qingyang Yi;Masahiro Fujita","doi":"10.1109/JETCAS.2024.3386328","DOIUrl":"10.1109/JETCAS.2024.3386328","url":null,"abstract":"Learned Image Compression (LIC) has shown a coding ability competitive to traditional standards. To address the complexity issue of LIC, various hardware accelerators are required. As one category of accelerators, FPGA has been used because of its good reconfigurability and high power efficiency. However, the prior work developed the algorithm of LIC neural network at first, and then proposed an associated FPGA hardware. This separate manner of algorithm and architecture development can easily cause a layout problem such as routing congestion when the hardware utilization is high. To mitigate this problem, this paper gives an algorithm-architecture co- optimization of LIC. We first restrict the input and output channel parallelism with some constraints to ease the routing issue with more DSP usage. After that, we adjust the numbers of channels to increase the DSP efficiency. As a result, compared with one recent work with a fine-grained pipelined architecture, we can reach up to 1.5x faster throughput with almost the same coding performance on the Kodak dataset. Compared with another recent work accelerated by AMD/Xilinx DPU, we can reach faster throughput with better coding performance.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 2","pages":"334-347"},"PeriodicalIF":3.7,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140592693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Generative Refinement for Low Bitrate Image Coding Using Vector Quantized Residual
IF 3.7 · CAS Q2 (Engineering & Technology)
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Pub Date : 2024-04-05 DOI: 10.1109/JETCAS.2024.3385653
Yuzhuo Kong;Ming Lu;Zhan Ma
{"title":"Generative Refinement for Low Bitrate Image Coding Using Vector Quantized Residual","authors":"Yuzhuo Kong;Ming Lu;Zhan Ma","doi":"10.1109/JETCAS.2024.3385653","DOIUrl":"10.1109/JETCAS.2024.3385653","url":null,"abstract":"Despite the significant progress in recent deep learning-based image compression, the reconstructed visual quality still suffers at low bitrates due to the lack of high-frequency information. Existing methods deploy the generative adversarial networks (GANs) as an additional loss to supervise the rate-distortion (R-D) optimization, capable of producing more high-frequency components for visually pleasing reconstruction but also introducing unexpected fake textures. This work, instead, proposes to generate high-frequency residuals to refine an image reconstruction compressed using existing image compression solutions. Such a residual signal is calculated between the decoded image and its uncompressed input and quantized to proper codeword vectors in a learnable codebook for decoder-side generative refinement. Extensive experiments demonstrate that our method can restore high-frequency information given images compressed by any codecs and outperform the state-of-the-art generative image compression algorithms or perceptual-oriented post-processing approaches. Moreover, the proposed method using vector quantized residual exhibits remarkable robustness and generalizes to both rules-based and learning-based compression models, which can be used as a plug-and-play module for perceptual optimization without re-training.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 2","pages":"185-197"},"PeriodicalIF":3.7,"publicationDate":"2024-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140592901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PKU-AIGI-500K: A Neural Compression Benchmark and Model for AI-Generated Images
IF 3.7 · CAS Q2 (Engineering & Technology)
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Pub Date : 2024-04-05 DOI: 10.1109/JETCAS.2024.3385629
Xunxu Duan;Siwei Ma;Hongbin Liu;Chuanmin Jia
{"title":"PKU-AIGI-500K: A Neural Compression Benchmark and Model for AI-Generated Images","authors":"Xunxu Duan;Siwei Ma;Hongbin Liu;Chuanmin Jia","doi":"10.1109/JETCAS.2024.3385629","DOIUrl":"10.1109/JETCAS.2024.3385629","url":null,"abstract":"In recent years, artificial intelligence-generated content (AIGC) enabled by foundation models has received increasing attention and is undergoing remarkable development. Text prompts can be elegantly translated/converted into high-quality, photo-realistic images. This remarkable feature, however, has introduced extremely high bandwidth requirements for compressing and transmitting the vast number of AI-generated images (AIGI) for such AIGC services. Despite this challenge, research on compression methods for AIGI is conspicuously lacking but undeniably necessary. This research addresses this critical gap by introducing the pioneering AIGI dataset, PKU-AIGI-500K, encompassing over 105k+ diverse prompts and 528k+ images derived from five major foundation models. Through this dataset, we delve into exploring and analyzing the essential characteristics of AIGC images and empirically prove that existing data-driven lossy compression methods achieve sub-optimal or less efficient rate-distortion performance without fine-tuning, primarily due to a domain shift between AIGIs and natural images. We comprehensively benchmark the rate-distortion performance and runtime complexity analysis of conventional and learned image coding solutions that are openly available, uncovering new insights for emerging studies in AIGI compression. Moreover, to harness the full potential of redundant information in AIGI and its corresponding text, we propose an AIGI compression model (Cross-Attention Transformer Codec, CATC) trained on this dataset as a strong baseline. Subsequent experimental results demonstrate that our proposed model achieves up to 30.09% bitrate reduction compared to the state-of-the-art (SOTA) H.266/VVC codec and outperforms the SOTA learned codec, paving the way for future research in AIGI compression.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 2","pages":"172-184"},"PeriodicalIF":3.7,"publicationDate":"2024-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140592692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Survey on Visual Signal Coding and Processing With Generative Models: Technologies, Standards, and Optimization
IF 3.7 · CAS Q2 (Engineering & Technology)
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Pub Date : 2024-03-21 DOI: 10.1109/JETCAS.2024.3403524
Zhibo Chen;Heming Sun;Li Zhang;Fan Zhang
{"title":"Survey on Visual Signal Coding and Processing With Generative Models: Technologies, Standards, and Optimization","authors":"Zhibo Chen;Heming Sun;Li Zhang;Fan Zhang","doi":"10.1109/JETCAS.2024.3403524","DOIUrl":"10.1109/JETCAS.2024.3403524","url":null,"abstract":"This paper provides a survey of the latest developments in visual signal coding and processing with generative models. Specifically, our focus is on presenting the advancement of generative models and their influence on research in the domain of visual signal coding and processing. This survey study begins with a brief introduction of well-established generative models, including the Variational Autoencoder (VAE) models, Generative Adversarial Network (GAN) models, Autoregressive (AR) models, Normalizing Flows and Diffusion models. The subsequent section of the paper explores the advancements in visual signal coding based on generative models, as well as the ongoing international standardization activities. In the realm of visual signal processing, our focus lies on the application and development of various generative models in the research of visual signal restoration. We also present the latest developments in generative visual signal synthesis and editing, along with visual signal quality assessment using generative models and quality assessment for generative models. The practical implementation of these studies is closely linked to the investigation of fast optimization. This paper additionally presents the latest advancements in fast optimization on visual signal coding and processing with generative models. We hope to advance this field by providing researchers and practitioners a comprehensive literature review on the topic of visual signal coding and processing with generative models.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 2","pages":"149-171"},"PeriodicalIF":3.7,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141105359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0