Latest Articles in IEEE Transactions on Image Processing

Perceive-IR: Learning to Perceive Degradation Better for All-in-One Image Restoration
IF 10.6 | CAS Tier 1 | Computer Science
IEEE Transactions on Image Processing | Pub Date: 2025-05-09 | DOI: 10.1109/tip.2025.3566300
Xu Zhang, Jiaqi Ma, Guoli Wang, Qian Zhang, Huan Zhang, Lefei Zhang
{"title":"Perceive-IR: Learning to Perceive Degradation Better for All-in-One Image Restoration","authors":"Xu Zhang, Jiaqi Ma, Guoli Wang, Qian Zhang, Huan Zhang, Lefei Zhang","doi":"10.1109/tip.2025.3566300","DOIUrl":"https://doi.org/10.1109/tip.2025.3566300","url":null,"abstract":"","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"32 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143930899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unrolling Plug-and-Play Gradient Graph Laplacian Regularizer for Image Restoration
IF 10.6 | CAS Tier 1 | Computer Science
IEEE Transactions on Image Processing | Pub Date: 2025-04-24 | DOI: 10.1109/tip.2025.3562425
Jianghe Cai, Gene Cheung, Fei Chen
{"title":"Unrolling Plug-and-Play Gradient Graph Laplacian Regularizer for Image Restoration","authors":"Jianghe Cai, Gene Cheung, Fei Chen","doi":"10.1109/tip.2025.3562425","DOIUrl":"https://doi.org/10.1109/tip.2025.3562425","url":null,"abstract":"","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"37 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143873084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Combining Pre- and Post-Demosaicking Noise Removal for RAW Video
IF 10.6 | CAS Tier 1 | Computer Science
IEEE Transactions on Image Processing | Pub Date: 2025-01-15 | DOI: 10.1109/tip.2025.3527886
M. Sánchez-Beeckman, A. Buades, N. Brandonisio, B. Kanoun
{"title":"Combining Pre- and Post-Demosaicking Noise Removal for RAW Video","authors":"M. Sánchez-Beeckman, A. Buades, N. Brandonisio, B. Kanoun","doi":"10.1109/tip.2025.3527886","DOIUrl":"https://doi.org/10.1109/tip.2025.3527886","url":null,"abstract":"","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"23 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142986397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Harnessing Multi-modal Large Language Models for Measuring and Interpreting Color Differences
IF 10.6 | CAS Tier 1 | Computer Science
IEEE Transactions on Image Processing | Pub Date: 2025-01-01 | DOI: 10.1109/tip.2024.3522802
Zhihua Wang, Yu Long, Qiuping Jiang, Chao Huang, Xiaochun Cao
{"title":"Harnessing Multi-modal Large Language Models for Measuring and Interpreting Color Differences","authors":"Zhihua Wang, Yu Long, Qiuping Jiang, Chao Huang, Xiaochun Cao","doi":"10.1109/tip.2024.3522802","DOIUrl":"https://doi.org/10.1109/tip.2024.3522802","url":null,"abstract":"","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"34 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142911627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Field-of-View IoU for Object Detection in 360° Images
IF 10.6 | CAS Tier 1 | Computer Science
IEEE Transactions on Image Processing | Pub Date: 2023-07-21 | DOI: 10.1109/TIP.2023.3296013
Miao Cao, Satoshi Ikehata, Kiyoharu Aizawa
{"title":"Field-of-View IoU for Object Detection in 360° Images.","authors":"Miao Cao, Satoshi Ikehata, Kiyoharu Aizawa","doi":"10.1109/TIP.2023.3296013","DOIUrl":"10.1109/TIP.2023.3296013","url":null,"abstract":"<p><p>360° cameras have gained popularity over the last few years. In this paper, we propose two fundamental techniques-Field-of-View IoU (FoV-IoU) and 360Augmentation for object detection in 360° images. Although most object detection neural networks designed for perspective images are applicable to 360° images in equirectangular projection (ERP) format, their performance deteriorates owing to the distortion in ERP images. Our method can be readily integrated with existing perspective object detectors and significantly improves the performance. The FoV-IoU computes the intersection-over-union of two Field-of-View bounding boxes in a spherical image which could be used for training, inference, and evaluation while 360Augmentation is a data augmentation technique specific to 360° object detection task which randomly rotates a spherical image and solves the bias due to the sphere-to-plane projection. We conduct extensive experiments on the 360° indoor dataset with different types of perspective object detectors and show the consistent effectiveness of our method.</p>","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"PP ","pages":""},"PeriodicalIF":10.6,"publicationDate":"2023-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9848778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
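To make the FoV-IoU idea above more concrete, the following Python sketch computes an approximate IoU between two field-of-view boxes given as (center longitude, center latitude, angular width, angular height) in degrees. It is an illustrative approximation inferred from the abstract, not the authors' released implementation: the box area is taken as width × height in angle, and the longitudinal gap between centers is scaled by the cosine of the mean latitude before computing the overlap. All function and variable names are my own.

```python
import math

def fov_iou(box1, box2):
    """Approximate FoV-IoU between two field-of-view bounding boxes.

    Each box is (theta, phi, alpha, beta) in degrees: center longitude,
    center latitude, angular width, angular height. A simplified planar
    approximation of the spherical overlap, for illustration only.
    """
    t1, p1, a1, b1 = box1
    t2, p2, a2, b2 = box2

    area1, area2 = a1 * b1, a2 * b2            # FoV areas in squared degrees

    # Longitudinal gap between centres, wrapped to [0, 180] and scaled by
    # cos(mean latitude) to account for meridian convergence near the poles.
    d_theta = abs(t1 - t2)
    d_theta = min(d_theta, 360.0 - d_theta) * math.cos(math.radians((p1 + p2) / 2.0))
    d_phi = abs(p1 - p2)

    # Approximate the intersection as an axis-aligned angular box.
    inter_w = min(max(0.0, (a1 + a2) / 2.0 - d_theta), a1, a2)
    inter_h = min(max(0.0, (b1 + b2) / 2.0 - d_phi), b1, b2)
    inter = inter_w * inter_h

    union = area1 + area2 - inter
    return inter / union if union > 0 else 0.0

# Same nominal offset (10 deg in longitude, 5 deg in latitude) near the equator
# and near the pole: the cosine scaling makes the polar pair overlap more.
print(round(fov_iou((0, 0, 60, 40), (10, 5, 60, 40)), 3))    # ~0.57
print(round(fov_iou((0, 60, 60, 40), (10, 65, 60, 40)), 3))  # ~0.68
```

The cosine scaling is the only difference from a naive planar IoU on (longitude, latitude); without it, boxes near the poles would be judged much farther apart than they really are on the sphere.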
TGFuse: An Infrared and Visible Image Fusion Approach Based on Transformer and Generative Adversarial Network
IF 10.6 | CAS Tier 1 | Computer Science
IEEE Transactions on Image Processing | Pub Date: 2023-05-10 | DOI: 10.1109/TIP.2023.3273451
Dongyu Rao, Tianyang Xu, Xiao-Jun Wu
{"title":"TGFuse: An Infrared and Visible Image Fusion Approach Based on Transformer and Generative Adversarial Network.","authors":"Dongyu Rao, Tianyang Xu, Xiao-Jun Wu","doi":"10.1109/TIP.2023.3273451","DOIUrl":"10.1109/TIP.2023.3273451","url":null,"abstract":"<p><p>The end-to-end image fusion framework has achieved promising performance, with dedicated convolutional networks aggregating the multi-modal local appearance. However, long-range dependencies are directly neglected in existing CNN fusion approaches, impeding balancing the entire image-level perception for complex scenario fusion. In this paper, therefore, we propose an infrared and visible image fusion algorithm based on the transformer module and adversarial learning. Inspired by the global interaction power, we use the transformer technique to learn the effective global fusion relations. In particular, shallow features extracted by CNN are interacted in the proposed transformer fusion module to refine the fusion relationship within the spatial scope and across channels simultaneously. Besides, adversarial learning is designed in the training process to improve the output discrimination via imposing competitive consistency from the inputs, reflecting the specific characteristics in infrared and visible images. The experimental performance demonstrates the effectiveness of the proposed modules, with superior improvement against the state-of-the-art, generalising a novel paradigm via transformer and adversarial learning in the fusion task.</p>","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"PP ","pages":""},"PeriodicalIF":10.6,"publicationDate":"2023-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9443051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
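As a rough picture of what a "transformer fusion module" over shallow infrared/visible features might look like, here is a minimal PyTorch sketch: the two feature maps are concatenated, refined by spatial self-attention over pixel tokens, re-weighted across channels, and projected to a fused output. The layer choices, dimensions, and names are illustrative assumptions and do not reproduce the TGFuse architecture.

```python
import torch
import torch.nn as nn

class ToyTransformerFusion(nn.Module):
    """Illustrative fusion block: spatial self-attention plus channel attention.

    Not the TGFuse implementation; a minimal sketch of fusing shallow
    infrared/visible CNN features with global interactions.
    """
    def __init__(self, channels=32, heads=4):
        super().__init__()
        self.proj_in = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.norm = nn.LayerNorm(channels)
        self.spatial_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        # Channel attention: pool spatially, then re-weight every channel.
        self.channel_attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid(),
        )
        self.proj_out = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feat_ir, feat_vis):
        x = self.proj_in(torch.cat([feat_ir, feat_vis], dim=1))    # (B, C, H, W)
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))           # (B, H*W, C)
        attn, _ = self.spatial_attn(tokens, tokens, tokens)        # global spatial mixing
        x = x + attn.transpose(1, 2).reshape(b, c, h, w)           # residual connection
        x = x * self.channel_attn(x)                               # cross-channel re-weighting
        return torch.sigmoid(self.proj_out(x))

# Dummy shallow features from the two modalities.
ir = torch.randn(2, 32, 32, 32)
vis = torch.randn(2, 32, 32, 32)
print(ToyTransformerFusion()(ir, vis).shape)    # torch.Size([2, 1, 32, 32])
```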
Rain Removal From Light Field Images With 4D Convolution and Multi-Scale Gaussian Process
IF 10.6 | CAS Tier 1 | Computer Science
IEEE Transactions on Image Processing | Pub Date: 2022-08-16 | DOI: 10.1109/TAP.2022.3218759
Zhiqiang Yuan, Jianhua Zhang, Yilin Ji, G. Pedersen, W. Fan
{"title":"Rain Removal From Light Field Images With 4D Convolution and Multi-Scale Gaussian Process","authors":"Zhiqiang Yuan, Jianhua Zhang, Yilin Ji, G. Pedersen, W. Fan","doi":"10.1109/TAP.2022.3218759","DOIUrl":"https://doi.org/10.1109/TAP.2022.3218759","url":null,"abstract":"Existing deraining methods focus mainly on a single input image. However, with just a single input image, it is extremely difficult to accurately detect and remove rain streaks, in order to restore a rain-free image. In contrast, a light field image (LFI) embeds abundant 3D structure and texture information of the target scene by recording the direction and position of each incident ray via a plenoptic camera. LFIs are becoming popular in the computer vision and graphics communities. However, making full use of the abundant information available from LFIs, such as 2D array of sub-views and the disparity map of each sub-view, for effective rain removal is still a challenging problem. In this paper, we propose a novel method, 4D-MGP-SRRNet, for rain streak removal from LFIs. Our method takes as input all sub-views of a rainy LFI. To make full use of the LFI, it adopts 4D convolutional layers to simultaneously process all sub-views of the LFI. In the pipeline, the rain detection network, MGPDNet, with a novel Multi-scale Self-guided Gaussian Process (MSGP) module is proposed to detect high-resolution rain streaks from all sub-views of the input LFI at multi-scales. Semi-supervised learning is introduced for MSGP to accurately detect rain streaks by training on both virtual-world rainy LFIs and real-world rainy LFIs at multi-scales via computing pseudo ground truths for real-world rain streaks. We then feed all sub-views subtracting the predicted rain streaks into a 4D convolution-based Depth Estimation Residual Network (DERNet) to estimate the depth maps, which are later converted into fog maps. Finally, all sub-views concatenated with the corresponding rain streaks and fog maps are fed into a powerful rainy LFI restoring model based on the adversarial recurrent neural network to progressively eliminate rain streaks and recover the rain-free LFI. Extensive quantitative and qualitative evaluations conducted on both synthetic LFIs and real-world LFIs demonstrate the effectiveness of our proposed method.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"32 1","pages":"921-936"},"PeriodicalIF":10.6,"publicationDate":"2022-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48830864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
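The core ingredient named in the abstract, 4D convolution over all sub-views of a light field, can be sketched as follows. PyTorch has no built-in Conv4d, so a common workaround, used here purely for illustration (the paper's layers may be implemented differently), is to decompose the 4D kernel into several Conv3d operations slid along the remaining angular dimension and summed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Conv4dNaive(nn.Module):
    """Naive 4D convolution for light-field tensors shaped (B, C, U, V, H, W).

    The 4D kernel is decomposed into k separate Conv3d's over (V, H, W); their
    responses are summed while sliding along the angular U dimension.
    Illustration only; assumes an odd kernel size.
    """
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.k = k
        self.pad = (k - 1) // 2
        self.convs = nn.ModuleList(
            nn.Conv3d(in_ch, out_ch, kernel_size=k, padding=self.pad, bias=(i == 0))
            for i in range(k)
        )

    def forward(self, x):
        b, c, u, v, h, w = x.shape
        # Zero-pad only the U dimension (F.pad fills dims from last to first:
        # W, H, V, then U), so the output keeps the same angular size.
        xp = F.pad(x, (0, 0, 0, 0, 0, 0, self.pad, self.pad))
        out = []
        for uo in range(u):
            # Sum the k 3D convolutions over the U-window centred at uo.
            out.append(sum(self.convs[i](xp[:, :, uo + i]) for i in range(self.k)))
        return torch.stack(out, dim=2)          # (B, out_ch, U, V, H, W)

# A toy 5x5 light field of 32x32 RGB sub-views.
lf = torch.randn(1, 3, 5, 5, 32, 32)
print(Conv4dNaive(3, 8)(lf).shape)              # torch.Size([1, 8, 5, 5, 32, 32])
```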
Designing an Illumination-Aware Network for Deep Image Relighting
IF 10.6 | CAS Tier 1 | Computer Science
IEEE Transactions on Image Processing | Pub Date: 2022-07-21 | DOI: 10.48550/arXiv.2207.10582
Zuo-Liang Zhu, Z. Li, Ruimao Zhang, Chunle Guo, Ming-Ming Cheng
{"title":"Designing an Illumination-Aware Network for Deep Image Relighting","authors":"Zuo-Liang Zhu, Z. Li, Ruimao Zhang, Chunle Guo, Ming-Ming Cheng","doi":"10.48550/arXiv.2207.10582","DOIUrl":"https://doi.org/10.48550/arXiv.2207.10582","url":null,"abstract":"Lighting is a determining factor in photography that affects the style, expression of emotion, and even quality of images. Creating or finding satisfying lighting conditions, in reality, is laborious and time-consuming, so it is of great value to develop a technology to manipulate illumination in an image as post-processing. Although previous works have explored techniques based on the physical viewpoint for relighting images, extensive supervisions and prior knowledge are necessary to generate reasonable images, restricting the generalization ability of these works. In contrast, we take the viewpoint of image-to-image translation and implicitly merge ideas of the conventional physical viewpoint. In this paper, we present an Illumination-Aware Network (IAN) which follows the guidance from hierarchical sampling to progressively relight a scene from a single image with high efficiency. In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process and to extract precise descriptors of light sources for further manipulations. We also introduce a depth-guided geometry encoder for acquiring valuable geometry- and structure-related representations once the depth information is available. Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods. The code and models are publicly available on https://github.com/NK-CS-ZZL/IAN.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"31 1","pages":"5396-5411"},"PeriodicalIF":10.6,"publicationDate":"2022-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49347222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
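For intuition about a residual block that also "extracts a descriptor of the light sources", here is a toy PyTorch block that pools its features into a small descriptor vector and uses it to modulate the residual branch. This is my own schematic reading of the idea; the actual IARB design is in the authors' repository linked above and differs from this sketch.

```python
import torch
import torch.nn as nn

class ToyIllumResBlock(nn.Module):
    """Toy residual block that returns refined features plus a pooled
    light-descriptor vector used to modulate the residual branch.
    Illustration only; not the IARB from the paper."""
    def __init__(self, ch=64, desc_dim=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        self.to_desc = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                     nn.Linear(ch, desc_dim))
        self.from_desc = nn.Linear(desc_dim, ch)

    def forward(self, x):
        feat = self.body(x)
        desc = self.to_desc(feat)                      # (B, desc_dim) light descriptor
        gain = torch.sigmoid(self.from_desc(desc))     # per-channel modulation weights
        out = x + feat * gain[:, :, None, None]        # illumination-modulated residual
        return out, desc

x = torch.randn(2, 64, 32, 32)
y, light_desc = ToyIllumResBlock()(x)
print(y.shape, light_desc.shape)    # torch.Size([2, 64, 32, 32]) torch.Size([2, 16])
```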
Content-Aware Scalable Deep Compressed Sensing
IF 10.6 | CAS Tier 1 | Computer Science
IEEE Transactions on Image Processing | Pub Date: 2022-07-19 | DOI: 10.48550/arXiv.2207.09313
Bin Chen, Jian Zhang
{"title":"Content-Aware Scalable Deep Compressed Sensing","authors":"Bin Chen, Jian Zhang","doi":"10.48550/arXiv.2207.09313","DOIUrl":"https://doi.org/10.48550/arXiv.2207.09313","url":null,"abstract":"To more efficiently address image compressed sensing (CS) problems, we present a novel content-aware scalable network dubbed CASNet which collectively achieves adaptive sampling rate allocation, fine granular scalability and high-quality reconstruction. We first adopt a data-driven saliency detector to evaluate the importance of different image regions and propose a saliency-based block ratio aggregation (BRA) strategy for sampling rate allocation. A unified learnable generating matrix is then developed to produce sampling matrix of any CS ratio with an ordered structure. Being equipped with the optimization-inspired recovery subnet guided by saliency information and a multi-block training scheme preventing blocking artifacts, CASNet jointly reconstructs the image blocks sampled at various sampling rates with one single model. To accelerate training convergence and improve network robustness, we propose an SVD-based initialization scheme and a random transformation enhancement (RTE) strategy, which are extensible without introducing extra parameters. All the CASNet components can be combined and learned end-to-end. We further provide a four-stage implementation for evaluation and practical deployments. Experiments demonstrate that CASNet outperforms other CS networks by a large margin, validating the collaboration and mutual supports among its components and strategies. Codes are available at https://github.com/Guaishou74851/CASNet.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"31 1","pages":"5412-5426"},"PeriodicalIF":10.6,"publicationDate":"2022-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45018690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
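The saliency-based sampling-rate allocation described above lends itself to a compact sketch. In the snippet below, per-block CS ratios are made proportional to aggregated block saliency while matching a target average ratio and staying within user-chosen bounds; the clipping and renormalization details are my own simplification and not necessarily the BRA rule used by CASNet.

```python
import numpy as np

def allocate_block_ratios(saliency, target_ratio, block=32, r_min=0.01, r_max=0.5):
    """Assign a CS sampling ratio to every block of an image.

    saliency     : 2-D non-negative importance map of shape (H, W)
    target_ratio : desired average sampling ratio over the whole image
    Returns an (H//block, W//block) array of per-block ratios whose mean is
    approximately target_ratio. A simplified, saliency-proportional scheme.
    """
    h, w = saliency.shape
    gh, gw = h // block, w // block
    # Aggregate saliency inside each block.
    s = saliency[:gh * block, :gw * block].reshape(gh, block, gw, block).sum(axis=(1, 3))
    s = s / (s.sum() + 1e-12)
    # Hand out the total measurement budget proportionally to block saliency.
    ratios = s * target_ratio * gh * gw
    # Keep each ratio in a sensible range, then renormalise to restore the budget.
    ratios = np.clip(ratios, r_min, r_max)
    ratios *= target_ratio / ratios.mean()
    return np.clip(ratios, r_min, r_max)

rng = np.random.default_rng(0)
sal = rng.random((256, 256)) ** 2                    # toy saliency map
rates = allocate_block_ratios(sal, target_ratio=0.1)
print(rates.shape, round(float(rates.mean()), 3))    # (8, 8) ~0.1
```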
Unsupervised High-Resolution Portrait Gaze Correction and Animation
IF 10.6 | CAS Tier 1 | Computer Science
IEEE Transactions on Image Processing | Pub Date: 2022-07-01 | DOI: 10.48550/arXiv.2207.00256
Jichao Zhang, Jingjing Chen, Hao Tang, E. Sangineto, Peng Wu, Yan Yan, N. Sebe, Wei Wang
{"title":"Unsupervised High-Resolution Portrait Gaze Correction and Animation","authors":"Jichao Zhang, Jingjing Chen, Hao Tang, E. Sangineto, Peng Wu, Yan Yan, N. Sebe, Wei Wang","doi":"10.48550/arXiv.2207.00256","DOIUrl":"https://doi.org/10.48550/arXiv.2207.00256","url":null,"abstract":"This paper proposes a gaze correction and animation method for high-resolution, unconstrained portrait images, which can be trained without the gaze angle and the head pose annotations. Common gaze-correction methods usually require annotating training data with precise gaze, and head pose information. Solving this problem using an unsupervised method remains an open problem, especially for high-resolution face images in the wild, which are not easy to annotate with gaze and head pose labels. To address this issue, we first create two new portrait datasets: CelebGaze ( $256 times 256$ ) and high-resolution CelebHQGaze ( $512 times 512$ ). Second, we formulate the gaze correction task as an image inpainting problem, addressed using a Gaze Correction Module (GCM) and a Gaze Animation Module (GAM). Moreover, we propose an unsupervised training strategy, i.e., Synthesis-As-Training, to learn the correlation between the eye region features and the gaze angle. As a result, we can use the learned latent space for gaze animation with semantic interpolation in this space. Moreover, to alleviate both the memory and the computational costs in the training and the inference stage, we propose a Coarse-to-Fine Module (CFM) integrated with GCM and GAM. Extensive experiments validate the effectiveness of our method for both the gaze correction and the gaze animation tasks in both low and high-resolution face datasets in the wild and demonstrate the superiority of our method with respect to the state of the art.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"31 1","pages":"5272-5286"},"PeriodicalIF":10.6,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47912929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
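A bare-bones way to picture the "gaze correction as inpainting" formulation: mask out the eye region and let an encoder-decoder regenerate it, conditioned on a latent code that could later be interpolated for animation. The toy module below is a schematic illustration of that formulation only; it is not the GCM/GAM/CFM architecture from the paper, and all names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class ToyEyeInpainter(nn.Module):
    """Schematic gaze-correction-as-inpainting module (illustration only).

    The masked face and the binary eye mask are encoded, mixed with a latent
    gaze code z, and decoded; interpolating z between two samples would play
    the role of gaze animation in this toy setup.
    """
    def __init__(self, z_dim=8):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(4, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),  # RGB + mask
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.z_proj = nn.Linear(z_dim, 64)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, img, eye_mask, z):
        masked = img * (1 - eye_mask)                        # erase the eye region
        h = self.enc(torch.cat([masked, eye_mask], dim=1))   # (B, 64, H/4, W/4)
        h = h + self.z_proj(z)[:, :, None, None]             # inject the gaze code
        out = self.dec(h)
        # Keep original pixels outside the eye region, inpaint inside it.
        return img * (1 - eye_mask) + out * eye_mask

img = torch.rand(1, 3, 64, 64)
mask = torch.zeros(1, 1, 64, 64)
mask[:, :, 20:32, 16:48] = 1.0                               # toy eye box
z = torch.randn(1, 8)
print(ToyEyeInpainter()(img, mask, z).shape)                 # torch.Size([1, 3, 64, 64])
```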