IEEE Transactions on Multimedia — Latest Publications

DIP: Diffusion Learning of Inconsistency Pattern for General DeepFake Detection
IF 8.4, CAS Tier 1, Computer Science
IEEE Transactions on Multimedia Pub Date: 2024-12-25 DOI: 10.1109/TMM.2024.3521766
Fan Nie;Jiangqun Ni;Jian Zhang;Bin Zhang;Weizhe Zhang
{"title":"DIP: Diffusion Learning of Inconsistency Pattern for General DeepFake Detection","authors":"Fan Nie;Jiangqun Ni;Jian Zhang;Bin Zhang;Weizhe Zhang","doi":"10.1109/TMM.2024.3521766","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521766","url":null,"abstract":"With the advancement of deepfake generation techniques, the importance of deepfake detection in protecting multimedia content integrity has become increasingly obvious. Recently, temporal inconsistency clues have been explored to improve the generalizability of deepfake video detection. According to our observation, the temporal artifacts of forged videos in terms of motion information usually exhibits quite distinct inconsistency patterns along horizontal and vertical directions, which could be leveraged to improve the generalizability of detectors. In this paper, a transformer-based framework for <bold>D</b>iffusion Learning of <bold>I</b>nconsistency <bold>P</b>attern (DIP) is proposed, which exploits directional inconsistencies for deepfake video detection. Specifically, DIP begins with a spatiotemporal encoder to represent spatiotemporal information. A directional inconsistency decoder is adopted accordingly, where direction-aware attention and inconsistency diffusion are incorporated to explore potential inconsistency patterns and jointly learn the inherent relationships. In addition, the SpatioTemporal Invariant Loss (STI Loss) is introduced to contrast spatiotemporally augmented sample pairs and prevent the model from overfitting nonessential forgery artifacts. Extensive experiments on several public datasets demonstrate that our method could effectively identify directional forgery clues and achieve state-of-the-art performance.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"2155-2167"},"PeriodicalIF":8.4,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143800839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
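
The STI Loss described above contrasts spatiotemporally augmented sample pairs so the detector does not overfit nonessential artifacts. The snippet below is a minimal PyTorch sketch of such a contrastive objective in the style of NT-Xent/InfoNCE; the temperature, the symmetric formulation, and the embedding dimensionality are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def sti_style_contrastive_loss(z_a, z_b, temperature=0.1):
    """Pull together embeddings of two spatiotemporal augmentations of the
    same clip and push apart embeddings of different clips (NT-Xent style)."""
    z_a = F.normalize(z_a, dim=1)            # (N, D) view-A embeddings
    z_b = F.normalize(z_b, dim=1)            # (N, D) view-B embeddings
    logits = z_a @ z_b.t() / temperature     # (N, N) pairwise cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # symmetric cross-entropy: the positive for clip i is its own other view
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# usage: z_a, z_b are encoder outputs for two augmented views of the same batch
z_a, z_b = torch.randn(8, 128), torch.randn(8, 128)
loss = sti_style_contrastive_loss(z_a, z_b)
```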
MENSA: Multi-Dataset Harmonized Pretraining for Semantic Segmentation
IF 8.4, CAS Tier 1, Computer Science
IEEE Transactions on Multimedia Pub Date: 2024-12-25 DOI: 10.1109/TMM.2024.3521851
Bowen Shi;Xiaopeng Zhang;Yaoming Wang;Wenrui Dai;Junni Zou;Hongkai Xiong
{"title":"MENSA: Multi-Dataset Harmonized Pretraining for Semantic Segmentation","authors":"Bowen Shi;Xiaopeng Zhang;Yaoming Wang;Wenrui Dai;Junni Zou;Hongkai Xiong","doi":"10.1109/TMM.2024.3521851","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521851","url":null,"abstract":"Existing pretraining methods for semantic segmentation are hampered by the task gap between global image -level pretraining and local pixel-level finetuning. Joint dense-level pretraining is a promising alternative to exploit off-the-shelf annotations from diverse segmentation datasets but suffers from low-quality class embeddings and inconsistent data and supervision signals across multiple datasets by directly employing CLIP. To overcome these challenges, we propose a novel <underline>M</u>ulti-datas<underline>E</u>t harmo<underline>N</u>ized pretraining framework for <underline>S</u>emantic s<underline>E</u>gmentation (MENSA). MENSA incorporates high-quality language embeddings and momentum-updated visual embeddings to effectively model the class relationships in the embedding space and thereby provide reliable supervision information for each category. To further adapt to multiple datasets, we achieve one-to-many pixel-embedding pairing with cross-dataset multi-label mapping through cross-modal information exchange to mitigate inconsistent supervision signals and introduce region-level and pixel-level cross-dataset mixing for varying data distribution. Experimental results demonstrate that MENSA is a powerful foundation segmentation model that consistently outperforms popular supervised or unsupervised ImageNet pretrained models for various benchmarks under standard fine-tuning. Furthermore, MENSA is shown to significantly benefit frozen-backbone fine-tuning and zero-shot learning by endowing pixel-level distinctiveness to learned representations.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"2127-2140"},"PeriodicalIF":8.4,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143801059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
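
MENSA pairs language embeddings with momentum-updated visual embeddings to model class relationships. Below is a minimal sketch of an EMA-updated per-class visual embedding bank, assuming a simple per-batch class-mean update; the momentum value and the update rule are assumptions rather than the paper's exact procedure.

```python
import torch

class MomentumClassEmbeddings:
    """Keeps one visual embedding per class, updated as an exponential
    moving average of the batch-wise class means."""
    def __init__(self, num_classes, dim, momentum=0.999):
        self.bank = torch.zeros(num_classes, dim)
        self.m = momentum

    @torch.no_grad()
    def update(self, feats, labels):
        # feats: (N, D) pixel/region features, labels: (N,) class ids
        self.bank = self.bank.to(feats.device)
        for c in labels.unique():
            mean_c = feats[labels == c].mean(dim=0)
            self.bank[c] = self.m * self.bank[c] + (1 - self.m) * mean_c

# usage: update with a batch of features, then read bank.bank as supervision targets
bank = MomentumClassEmbeddings(num_classes=150, dim=512)
bank.update(torch.randn(64, 512), torch.randint(0, 150, (64,)))
```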
Auxiliary Representation Guided Network for Visible-Infrared Person Re-Identification
IF 8.4, CAS Tier 1, Computer Science
IEEE Transactions on Multimedia Pub Date: 2024-12-25 DOI: 10.1109/TMM.2024.3521773
Mengzan Qi;Sixian Chan;Chen Hang;Guixu Zhang;Tieyong Zeng;Zhi Li
{"title":"Auxiliary Representation Guided Network for Visible-Infrared Person Re-Identification","authors":"Mengzan Qi;Sixian Chan;Chen Hang;Guixu Zhang;Tieyong Zeng;Zhi Li","doi":"10.1109/TMM.2024.3521773","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521773","url":null,"abstract":"Visible-Infrared Person Re-identification aims to retrieve images of specific identities across modalities. To relieve the large cross-modality discrepancy, researchers introduce the auxiliary modality within the image space to assist modality-invariant representation learning. However, the challenge persists in constraining the inherent quality of generated auxiliary images, further leading to a bottleneck in retrieval performance. In this paper, we propose a novel Auxiliary Representation Guided Network (ARGN) to explore the potential of auxiliary representations, which are directly generated within the modality-shared embedding space. In contrast to the original visible and infrared representations, which contain information solely from their respective modalities, these auxiliary representations integrate cross-modality information by fusing both modalities. In our framework, we utilize these auxiliary representations as modality guidance to reduce the cross-modality discrepancy. First, we propose a High-quality Auxiliary Representation Learning (HARL) framework to generate identity-consistent auxiliary representations. The primary objective of our HARL is to ensure that auxiliary representations capture diverse modality information from both modalities while concurrently preserving identity-related discrimination. Second, guided by these auxiliary representations, we design an Auxiliary Representation Guided Constraint (ARGC) to optimize the modality-shared embedding space. By incorporating this constraint, the modality-shared embedding space is optimized to achieve enhanced intra-identity compactness and inter-identity separability, further improving the retrieval performance. In addition, to improve the robustness of our framework against the modality variation, we introduce a Part-based Adaptive Gaussian Module (PAGM) to adaptively extract discriminative information across modalities. Finally, extensive experiments are conducted to demonstrate the superiority of our method over state-of-the-art approaches on three VI-ReID datasets.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"340-355"},"PeriodicalIF":8.4,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
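
The abstract describes auxiliary representations fused from both modalities and a constraint that improves intra-identity compactness and inter-identity separability. The sketch below is one hypothetical rendering of that idea: average the normalized visible and infrared embeddings as the auxiliary representation, pull both modalities toward it for the same identity, and keep auxiliary representations of different identities apart. The simple averaging, the margin, and the equal loss weighting are assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def auxiliary_guided_loss(vis, ir, labels, margin=0.3):
    """vis, ir: (N, D) embeddings of the same N samples in the two
    modalities; labels: (N,) identity ids."""
    aux = F.normalize(0.5 * (vis + ir), dim=1)      # fused auxiliary representation
    vis = F.normalize(vis, dim=1)
    ir = F.normalize(ir, dim=1)
    # intra-identity compactness: both modalities close to their auxiliary anchor
    compact = (1 - (vis * aux).sum(1)) + (1 - (ir * aux).sum(1))
    # inter-identity separability: auxiliary anchors of different identities kept apart
    sim = aux @ aux.t()
    diff = labels.unsqueeze(0) != labels.unsqueeze(1)
    separate = F.relu(sim[diff] - margin).mean() if diff.any() else sim.new_zeros(())
    return compact.mean() + separate

loss = auxiliary_guided_loss(torch.randn(6, 256), torch.randn(6, 256),
                             torch.tensor([0, 0, 1, 1, 2, 2]))
```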
Adaptive Fusion Learning for Compositional Zero-Shot Recognition
IF 8.4, CAS Tier 1, Computer Science
IEEE Transactions on Multimedia Pub Date: 2024-12-25 DOI: 10.1109/TMM.2024.3521852
Lingtong Min;Ziman Fan;Shunzhou Wang;Feiyang Dou;Xin Li;Binglu Wang
{"title":"Adaptive Fusion Learning for Compositional Zero-Shot Recognition","authors":"Lingtong Min;Ziman Fan;Shunzhou Wang;Feiyang Dou;Xin Li;Binglu Wang","doi":"10.1109/TMM.2024.3521852","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521852","url":null,"abstract":"Compositional Zero-Shot Learning (CZSL) aims to learn visual concepts (i.e., attributes and objects) from seen compositions and combine them to predict unseen compositions. Existing visual encoders in CZSL typically use traditional visual encoders (i.e., CNN and Transformer) or image encoders from Visual-Language Models (VLMs) to encode image features. However, traditional visual encoders need more multi-modal textual information, and image encoders of VLMs exhibit dependence on pre-training data, making them less effective when used independently for predicting unseen compositions. To overcome this limitation, we propose a novel approach based on the joint modeling of traditional visual encoders and VLMs visual encoders to enhance the prediction ability for uncommon and unseen compositions. Specifically, we design an adaptive fusion module that automatically adjusts the weighted parameters of similarity scores between traditional and VLMs methods during training, and these weighted parameters are inherited during the inference process. Given the significance of disentangling attributes and objects, we design a Multi-Attribute Object Module that, during the training phase, incorporates multiple pairs of attributes and objects as prior knowledge, leveraging this rich prior knowledge to facilitate the disentanglement of attributes and objects. Building upon this, we select the text encoder from VLMs to construct the Adaptive Fusion Network. We conduct extensive experiments on the Clothing16 K, UT-Zappos50 K, and C-GQA datasets, achieving excellent performance on the Clothing16 K and UT-Zappos50 K datasets.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"1193-1204"},"PeriodicalIF":8.4,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143594320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
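
The adaptive fusion module weights the similarity scores of the traditional-encoder branch and the VLM branch with parameters learned during training and inherited at inference. A minimal sketch of such score-level fusion is shown below; representing the weighting as a single sigmoid-gated scalar is an assumption about its form.

```python
import torch
import torch.nn as nn

class AdaptiveScoreFusion(nn.Module):
    """Learnable convex combination of composition-similarity scores from
    two branches; the learned weight is reused unchanged at inference."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(1))   # trained jointly with both branches

    def forward(self, sim_traditional, sim_vlm):
        # both inputs: (batch, num_compositions) similarity/logit matrices
        w = torch.sigmoid(self.alpha)
        return w * sim_traditional + (1 - w) * sim_vlm

fuse = AdaptiveScoreFusion()
scores = fuse(torch.randn(4, 100), torch.randn(4, 100))   # fused composition scores
```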
GCN-Based Multi-Modality Fusion Network for Action Recognition
IF 8.4, CAS Tier 1, Computer Science
IEEE Transactions on Multimedia Pub Date: 2024-12-25 DOI: 10.1109/TMM.2024.3521749
Shaocan Liu;Xingtao Wang;Ruiqin Xiong;Xiaopeng Fan
{"title":"GCN-Based Multi-Modality Fusion Network for Action Recognition","authors":"Shaocan Liu;Xingtao Wang;Ruiqin Xiong;Xiaopeng Fan","doi":"10.1109/TMM.2024.3521749","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521749","url":null,"abstract":"Thanks to the remarkably expressive power for depicting structural data, Graph Convolutional Network (GCN) has been extensively adopted for skeleton-based action recognition in recent years. However, GCN is designed to operate on irregular graphs of skeletons, making it difficult to deal with other modalities represented on regular grids directly. Thus, although existing works have demonstrated the necessity of multi-modality fusion, few methods in the literature explore the fusion of skeleton and other modalities within a GCN architecture. In this paper, we present a novel GCN-based framework, termed GCN-based Multi-modality Fusion Network (GMFNet), to efficiently utilize complementary information in RGB and skeleton data. GMFNet is constructed by connecting a main stream with a GCN-based multi-modality fusion module (GMFM), whose goal is to gradually combine finer and coarse action-related information extracted from skeletons and RGB videos, respectively. Specifically, a cross-modality data mapping method is designed to transform an RGB video into a <inline-formula><tex-math>$mathit{skeleton-like}$</tex-math></inline-formula> (SL) sequence, which is then integrated with the skeleton sequence under a gradual fusion scheme in GMFM. The fusion results are fed into the following main stream to extract more discriminative features and produce the final prediction. In addition, a spatio-temporal joint attention mechanism is introduced for more accurate action recognition. Compared to the multi-stream approaches, GMFNet can be implemented within an end-to-end training pipeline and thereby reduces the training complexity. Experimental results show the proposed GMFNet achieves impressive performance on two large-scale data sets of NTU RGB+D 60 and 120.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"1242-1253"},"PeriodicalIF":8.4,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143594346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
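
GMFNet runs graph convolutions over skeleton joints and gradually fuses in the skeleton-like sequence mapped from the RGB stream. The snippet below sketches a basic spatial graph convolution over joints plus a simple additive gradual-fusion step; the normalized adjacency, the fusion weight, and the (N, T, V, C) tensor layout are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class JointGCNLayer(nn.Module):
    """Spatial graph convolution over skeleton joints; A is a (V, V)
    normalized joint adjacency matrix."""
    def __init__(self, in_ch, out_ch, A):
        super().__init__()
        self.register_buffer("A", A)
        self.proj = nn.Linear(in_ch, out_ch)

    def forward(self, x):                                 # x: (N, T, V, C)
        x = torch.einsum("uv,ntvc->ntuc", self.A, x)      # aggregate neighboring joints
        return torch.relu(self.proj(x))

def gradual_fuse(skel_feat, sl_feat, gamma=0.5):
    """Additively blend skeleton features with the skeleton-like (SL)
    features mapped from RGB; gamma is an assumed fusion weight."""
    return skel_feat + gamma * sl_feat

# usage: 25-joint skeleton, identity adjacency as a placeholder
layer = JointGCNLayer(3, 64, torch.eye(25))
out = layer(torch.randn(2, 16, 25, 3))                    # (2, 16, 25, 64)
```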
DivDiff: A Conditional Diffusion Model for Diverse Human Motion Prediction
IF 8.4, CAS Tier 1, Computer Science
IEEE Transactions on Multimedia Pub Date: 2024-12-25 DOI: 10.1109/TMM.2024.3521821
Hua Yu;Yaqing Hou;Wenbin Pei;Yew-Soon Ong;Qiang Zhang
{"title":"DivDiff: A Conditional Diffusion Model for Diverse Human Motion Prediction","authors":"Hua Yu;Yaqing Hou;Wenbin Pei;Yew-Soon Ong;Qiang Zhang","doi":"10.1109/TMM.2024.3521821","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521821","url":null,"abstract":"Diverse human motion prediction (HMP) aims to predict multiple plausible future motions given an observed human motion sequence. It is a challenging task due to the diversity of potential human motions while ensuring an accurate description of future human motions. Current solutions are either low-diversity or limited in expressiveness. Recent denoising diffusion probabilistic models (DDPM) demonstrate promising performance in various generative tasks. However, introducing DDPM directly into diverse HMP incurs some issues. While DDPM can enhance the diversity of potential human motion patterns, the predicted human motions gradually become implausible over time due to significant noise disturbances in the forward process of DDPM. This phenomenon leads to the predicted human motions being unrealistic, seriously impacting the quality of predicted motions and restricting their practical applicability in real-world scenarios. To alleviate this, we propose a novel conditional diffusion-based generative model, called DivDiff, to predict more diverse and realistic human motions. Specifically, the DivDiff employs DDPM as our backbone and incorporates Discrete Cosine Transform (DCT) and Transformer mechanisms to encode the observed human motion sequence as a condition to instruct the reverse process of DDPM. More importantly, we design a diversified reinforcement sampling function (DRSF) to enforce human skeletal constraints on the predicted human motions. DRSF utilizes the acquired information from human skeletal as prior knowledge, thereby reducing significant disturbances introduced during the forward process. Extensive results received in the experiments on two widely-used datasets (Human3.6M and HumanEva-I) demonstrate that our model obtains competitive performance on both diversity and accuracy.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"1848-1859"},"PeriodicalIF":8.4,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143801055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
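
DivDiff encodes the observed motion with a DCT (together with a Transformer) as the condition guiding the reverse diffusion process. Below is a minimal sketch of the DCT part alone: an orthonormal DCT-II applied along the time axis, keeping only the lowest-frequency coefficients as a compact condition. The number of retained coefficients and the flattened (T, J*3) pose layout are assumptions.

```python
import math
import torch

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix; row k holds frequency k."""
    k = torch.arange(n, dtype=torch.float32).unsqueeze(1)   # frequency index
    m = torch.arange(n, dtype=torch.float32).unsqueeze(0)   # time index
    D = torch.cos(math.pi / n * (m + 0.5) * k) * math.sqrt(2.0 / n)
    D[0] = D[0] / math.sqrt(2.0)
    return D                                                # (n, n)

def encode_motion(x, num_coeffs=10):
    """x: (T, J*3) observed pose sequence; returns the lowest num_coeffs
    DCT coefficients along time as a compact diffusion condition."""
    D = dct_matrix(x.size(0))
    return (D @ x)[:num_coeffs]                             # (num_coeffs, J*3)

cond = encode_motion(torch.randn(25, 66))                   # 25 frames, 22 joints x 3
```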
Spotlight Text Detector: Spotlight on Candidate Regions Like a Camera
IF 8.4, CAS Tier 1, Computer Science
IEEE Transactions on Multimedia Pub Date: 2024-12-25 DOI: 10.1109/TMM.2024.3521824
Xu Han;Junyu Gao;Chuang Yang;Yuan Yuan;Qi Wang
{"title":"Spotlight Text Detector: Spotlight on Candidate Regions Like a Camera","authors":"Xu Han;Junyu Gao;Chuang Yang;Yuan Yuan;Qi Wang","doi":"10.1109/TMM.2024.3521824","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521824","url":null,"abstract":"The irregular contour representation is one of the tough challenges in scene text detection. Although segmentation-based methods have achieved significant progress with the help of flexible pixel prediction, the overlap of geographically close texts hinders detecting them separately. To alleviate this problem, some shrink-based methods predict text kernels and expand them to restructure texts. However, the text kernel is an artificial object with incomplete semantic features that are prone to incorrect or missing detection. In addition, different from the general objects, the geometry features (aspect ratio, scale, and shape) of scene texts vary significantly, which makes it difficult to detect them accurately. To consider the above problems, we propose an effective spotlight text detector (STD), which consists of a spotlight calibration module (SCM) and a multivariate information extraction module (MIEM). The former concentrates efforts on the candidate kernel, like a camera focus on the target. It obtains candidate features through a mapping filter and calibrates them precisely to eliminate some false positive samples. The latter designs different shape schemes to explore multiple geometric features for scene texts. It helps extract various spatial relationships to improve the model's ability to recognize kernel regions. Ablation studies prove the effectiveness of the designed SCM and MIEM. Extensive experiments verify that our STD is superior to existing state-of-the-art methods on various datasets, including ICDAR2015, CTW1500, MSRA-TD500, and Total-Text.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"1937-1949"},"PeriodicalIF":8.4,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143801056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
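
The spotlight calibration module focuses on candidate kernel regions and calibrates their features to remove false positives. The sketch below is one hypothetical realization: mask-pool backbone features inside each candidate kernel and score the pooled vector with a small MLP to keep or reject the candidate; the pooling scheme and the scoring head are assumptions, not the paper's actual SCM.

```python
import torch
import torch.nn as nn

class SpotlightCalibration(nn.Module):
    """Pools features inside each candidate text-kernel mask ("spotlight")
    and predicts a confidence used to suppress false-positive kernels."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(channels, channels // 2),
                                   nn.ReLU(),
                                   nn.Linear(channels // 2, 1))

    def forward(self, feat, masks):
        # feat: (C, H, W) backbone features; masks: (K, H, W) 0/1 candidate masks
        masks = masks.float()
        areas = masks.flatten(1).sum(1).clamp(min=1)                   # (K,)
        pooled = torch.einsum("khw,chw->kc", masks, feat) / areas.unsqueeze(1)
        return torch.sigmoid(self.score(pooled)).squeeze(1)            # (K,) confidences

scm = SpotlightCalibration(channels=256)
conf = scm(torch.randn(256, 64, 64), torch.randint(0, 2, (5, 64, 64)))
```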
Hard-Sample Style Guided Patch Attack With RL-Enhanced Motion Pattern for Video Recognition
IF 8.4, CAS Tier 1, Computer Science
IEEE Transactions on Multimedia Pub Date: 2024-12-25 DOI: 10.1109/TMM.2024.3521832
Jian Yang;Jun Li;Yunong Cai;Guoming Wu;Zhiping Shi;Chaodong Tan;Xianglong Liu
{"title":"Hard-Sample Style Guided Patch Attack With RL-Enhanced Motion Pattern for Video Recognition","authors":"Jian Yang;Jun Li;Yunong Cai;Guoming Wu;Zhiping Shi;Chaodong Tan;Xianglong Liu","doi":"10.1109/TMM.2024.3521832","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521832","url":null,"abstract":"Adversarial attacks have been extensively studied in the image field. In recent years, research has shown that video recognition models are also vulnerable to adversarial examples. However, most studies about adversarial attacks for video models have focused on perturbation-based methods, while patch-based black-box attacks have received less attention. Despite the excellent performance of perturbation-based attacks, these attacks are impractical for real-world implementation. Most existing patch-based black-box attacks require occluding larger areas and performing more queries to the target model. In this paper, we propose a hard-sample style guided patch attack with reinforcement learning (RL) enhanced motion patterns for video recognition (HSPA). Specifically, we utilize the style features of video hard samples and transfer their multi-dimensional style features to images to obtain a texture patch set. Then we use reinforcement learning to locate the patch coordinates and obtain a specific adversarial motion pattern of the patch to successfully perform an effective attack on a video recognition model in both the spatial and temporal dimensions. Our experiments on three widely-used video action recognition models (C3D, LRCN, and TDN) and two mainstream datasets (UCF-101 and HMDB-51) demonstrate the superior performance of our method compared to other state-of-the-art approaches.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"1205-1215"},"PeriodicalIF":8.4,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143594425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
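
HSPA locates patch coordinates with reinforcement learning. The snippet below is a generic REINFORCE-style sketch that samples a patch location on a coarse grid and reinforces locations that lower the victim model's true-class confidence; the grid size, reward definition, and optimizer settings are assumptions, and the adversarial motion-pattern search is omitted.

```python
import torch
import torch.nn as nn

class PatchLocationPolicy(nn.Module):
    """Categorical policy over a grid of candidate patch locations."""
    def __init__(self, grid=7):
        super().__init__()
        self.grid = grid
        self.logits = nn.Parameter(torch.zeros(grid * grid))

    def sample(self):
        dist = torch.distributions.Categorical(logits=self.logits)
        cell = dist.sample()
        row, col = cell // self.grid, cell % self.grid    # grid coordinates of the patch
        return (row, col), dist.log_prob(cell)

policy = PatchLocationPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

(row, col), logp = policy.sample()
# reward: e.g. drop in the target model's true-class score after pasting the
# patch at (row, col) on the frames -- obtained from black-box queries
reward = torch.tensor(0.3)
loss = -(reward * logp)            # policy gradient: reinforce effective locations
opt.zero_grad()
loss.backward()
opt.step()
```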
VLAB: Enhancing Video Language Pretraining by Feature Adapting and Blending
IF 8.4, CAS Tier 1, Computer Science
IEEE Transactions on Multimedia Pub Date: 2024-12-25 DOI: 10.1109/TMM.2024.3521729
Xingjian He;Sihan Chen;Fan Ma;Zhicheng Huang;Xiaojie Jin;Zikang Liu;Dongmei Fu;Yi Yang;Jing Liu;Jiashi Feng
{"title":"VLAB: Enhancing Video Language Pretraining by Feature Adapting and Blending","authors":"Xingjian He;Sihan Chen;Fan Ma;Zhicheng Huang;Xiaojie Jin;Zikang Liu;Dongmei Fu;Yi Yang;Jing Liu;Jiashi Feng","doi":"10.1109/TMM.2024.3521729","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521729","url":null,"abstract":"Large-scale image-text contrastive pre-training models, such as CLIP, have been demonstrated to effectively learn high-quality multimodal representations. However, there is limited research on learning video-text representations for general video multimodal tasks based on these powerful features. Towards this goal, we propose a novel video-text pre-training method dubbed VLAB: <bold>V</b>ideo <bold>L</b>anguage pre-training by feature <bold>A</b>dapting and <bold>B</b>lending, which transfers CLIP representations to video pre-training tasks and develops unified video multimodal models for a wide range of video-text tasks. Specifically, VLAB is founded on two key strategies: feature adapting and feature blending. In the former, we introduce a new video adapter module to address CLIP's deficiency in modeling temporal information and extend the model's capability to encompass both contrastive and generative tasks. In the latter, we propose an end-to-end training method that further enhances the model's performance by exploiting the complementarity of image and video features. We validate the effectiveness and versatility of VLAB through extensive experiments on highly competitive video multimodal tasks, including video text retrieval, video captioning, and video question answering. Remarkably, VLAB outperforms competing methods significantly and sets new records in video question answering on MSRVTT, MSVD, and TGIF datasets. It achieves an accuracy of 49.6, 60.9, and 79.0, respectively.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"2168-2180"},"PeriodicalIF":8.4,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143800841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
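
VLAB's feature-adapting strategy inserts a video adapter so that CLIP's image features can model temporal information. Below is a minimal sketch of a bottleneck temporal adapter: frame features are projected down, mixed across time with self-attention, projected back up, and added residually; the dimensions, head count, and placement inside the encoder are assumptions.

```python
import torch
import torch.nn as nn

class TemporalAdapter(nn.Module):
    """Bottleneck adapter with temporal self-attention over per-frame features."""
    def __init__(self, dim=768, bottleneck=128, heads=4):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.attn = nn.MultiheadAttention(bottleneck, heads, batch_first=True)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):                    # x: (B, T, dim) per-frame CLIP features
        h = self.down(x)
        h, _ = self.attn(h, h, h)            # exchange information across frames
        return x + self.up(h)                # residual keeps the pretrained representation

adapter = TemporalAdapter()
video_feats = adapter(torch.randn(2, 8, 768))   # 2 clips, 8 frames each
```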
VB-KGN: Variational Bayesian Kernel Generation Networks for Motion Image Deblurring
IF 8.4, CAS Tier 1, Computer Science
IEEE Transactions on Multimedia Pub Date: 2024-12-25 DOI: 10.1109/TMM.2024.3521805
Ying Fu;Xinyu Zhu;Xiaojie Li;Xin Wang;Xi Wu;Shu Hu;Yi Wu;Siwei Lyu;Wei Liu
{"title":"VB-KGN: Variational Bayesian Kernel Generation Networks for Motion Image Deblurring","authors":"Ying Fu;Xinyu Zhu;Xiaojie Li;Xin Wang;Xi Wu;Shu Hu;Yi Wu;Siwei Lyu;Wei Liu","doi":"10.1109/TMM.2024.3521805","DOIUrl":"https://doi.org/10.1109/TMM.2024.3521805","url":null,"abstract":"Motion blur estimation is a critical and fundamental task in scene analysis and image restoration. While most state-of-the-art deep learning-based methods for single-image motion image deblurring focus on constructing deep networks or developing training strategies, the characterization of motion blur has received less attention. In this paper, we innovatively propose a non-parametric Variational Bayesian Kernel Generation Network (VB-KGN) for characterizing motion blur in a single image. To solve this model, we employ the variational inference framework to approximate the expected statistical distribution of motion blur images in a data-driven manner. The qualitative and quantitative evaluations of our experimental results demonstrate that our proposed model can generate highly accurate motion blur kernels, significantly improving motion image deblurring performance and substantially reducing the need for extensive training sample preprocessing for deblurring tasks.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"2028-2042"},"PeriodicalIF":8.4,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143800876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
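
VB-KGN characterizes the blur kernel with variational inference. The sketch below is a generic VAE-style illustration under that framing: an amortized Gaussian posterior over a flattened kernel, a reparameterized sample, a KL term against a standard-normal prior, and a softmax so the kernel stays non-negative and normalized; the feature dimension, kernel size, and network form are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class KernelPosterior(nn.Module):
    """Amortized Gaussian posterior over a flattened motion-blur kernel."""
    def __init__(self, feat_dim=256, kernel_size=19):
        super().__init__()
        k2 = kernel_size * kernel_size
        self.mu = nn.Linear(feat_dim, k2)
        self.logvar = nn.Linear(feat_dim, k2)

    def forward(self, feats):                 # feats: (B, feat_dim) image features
        mu, logvar = self.mu(feats), self.logvar(feats)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
        kernel = torch.softmax(z, dim=1)      # entries non-negative, summing to 1
        return kernel, kl

posterior = KernelPosterior()
kernel, kl = posterior(torch.randn(4, 256))   # (4, 361) kernels + KL regularizer
```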