Latest Articles: IEEE Transactions on Image Processing

Toward Better Than Pseudo-Reference in Underwater Image Enhancement
IF 10.6 · CAS Tier 1 · Computer Science
IEEE Transactions on Image Processing | Pub Date: 2025-09-23 | DOI: 10.1109/tip.2025.3611138
Yi Liu, Qiuping Jiang, Xingbo Li, Ting Luo, Wenqi Ren
Since degraded underwater images are not always accompanied by distortion-free counterparts in real-world settings, existing underwater image enhancement (UIE) methods are mostly trained on paired sets of raw underwater images and their corresponding pseudo-reference labels. Although existing UIE datasets manually select the best model-generated results as pseudo-references, such labels do not always exhibit perfect visual quality. It is therefore worth investigating whether the performance bottleneck of UIE networks trained with imperfect pseudo-references can be broken. Motivated by these observations, this paper focuses on designing more advanced loss functions rather than more complex network architectures. Specifically, a plug-and-play hybrid Performance SurPassing Loss (PSPL), consisting of a Quality Score Comparison Loss (QSCL) and a scene Depth-aware Unpaired Contrastive Loss (DUCL), is formulated to guide the training of a UIE network. QSCL guides the UIE network to generate enhanced results with better visual quality than the pseudo-references by constructing quality-score comparison losses at both the image level and the region level. However, QSCL alone cannot guarantee desirable results for severely degraded distant regions. A tailored DUCL therefore handles this challenge from the scene-depth perspective: it encourages the distant regions of the enhanced results to move closer to the high-quality nearby regions (pull) and farther from the low-quality distant regions (push) of the pseudo-references. Extensive experimental results demonstrate the advantage of PSPL over state-of-the-art methods even with an extremely simple and lightweight UIE network. The source code will be released at https://github.com/lewis081/PSPL.
Citations: 0
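The QSCL/DUCL combination described in the abstract can be illustrated with a minimal numpy sketch. This is a simplification under stated assumptions, not the paper's formulation: the ranking form of `qscl_term`, the triplet form of `ducl_term`, the cosine distance, the `margin` value, and the (B, C) pooled region features are all illustrative choices.

```python
import numpy as np

def cosine_sim(a, b):
    # Row-wise cosine similarity for (B, C) feature matrices.
    num = np.sum(a * b, axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12
    return num / den

def qscl_term(q_enhanced, q_pseudo_ref):
    # Ranking-style penalty (assumed form): nonzero whenever the predicted
    # quality score of the enhanced result fails to exceed that of the
    # pseudo-reference; the paper applies this at image and region level.
    return np.mean(np.maximum(q_pseudo_ref - q_enhanced, 0.0))

def ducl_term(enh_far, ref_near, ref_far, margin=0.5):
    # Triplet-style sketch of the pull/push idea: pull the enhanced image's
    # distant-region features toward the pseudo-reference's high-quality
    # nearby regions, push them from its low-quality distant regions.
    d_pos = 1.0 - cosine_sim(enh_far, ref_near)   # pull
    d_neg = 1.0 - cosine_sim(enh_far, ref_far)    # push
    return np.mean(np.maximum(d_pos - d_neg + margin, 0.0))
```

In this sketch the DUCL term vanishes once the enhanced distant regions are sufficiently closer (by the margin) to the reference's nearby regions than to its distant ones.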
PCE-GAN: A Generative Adversarial Network for Point Cloud Attribute Quality Enhancement Based on Optimal Transport
IF 10.6 · CAS Tier 1 · Computer Science
IEEE Transactions on Image Processing | Pub Date: 2025-09-23 | DOI: 10.1109/tip.2025.3611178
Tian Guo, Hui Yuan, Qi Liu, Honglei Su, Raouf Hamzaoui, Sam Kwong
Citations: 0
Multiscale Segmentation-Guided Fusion Network for Hyperspectral Image Classification
IF 10.6 · CAS Tier 1 · Computer Science
IEEE Transactions on Image Processing | Pub Date: 2025-09-23 | DOI: 10.1109/tip.2025.3611146
Hongmin Gao, Runhua Sheng, Yuanchao Su, Zhonghao Chen, Shufang Xu, Lianru Gao
Convolutional neural networks (CNNs) have demonstrated strong feature-extraction capabilities in Euclidean spaces, achieving remarkable success in hyperspectral image (HSI) classification. Meanwhile, graph convolutional networks (GCNs) effectively capture spatial-contextual characteristics by leveraging correlations in non-Euclidean spaces, uncovering hidden relationships that enhance HSI classification (HSIC) performance. Methods combining GCNs with CNNs have achieved excellent results. However, existing GCN methods rely primarily on single-scale graph structures, limiting their ability to extract features across different spatial ranges. To address this issue, this paper proposes a multiscale segmentation-guided fusion network (MS2FN) for HSIC. The method constructs pixel-level graph structures from multiscale segmentation data, enabling the GCN to extract features across various spatial ranges. Moreover, because effectively utilizing features extracted at different spatial scales is crucial for classification performance, the paper adopts distinct processing strategies for different feature types to enhance feature representation. Comparative experiments demonstrate that the proposed method outperforms several state-of-the-art (SOTA) approaches in accuracy. The source code will be released at https://github.com/shengrunhua/MS2FN.
Citations: 0
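The pixel-level, multiscale graph construction described in the abstract can be sketched roughly as follows. This is an illustration under assumptions, not the MS2FN implementation: the dense same-segment adjacency, the mean aggregation, and all function names are hypothetical, and a real HSI pipeline would use sparse graphs over far larger images.

```python
import numpy as np

def segment_graph(labels):
    # Dense pixel-level adjacency for one segmentation scale:
    # two pixels are connected iff they share a segment label.
    flat = labels.ravel()
    return (flat[:, None] == flat[None, :]).astype(np.float32)

def multiscale_graphs(label_maps):
    # One pixel-level graph per segmentation scale; a GCN branch
    # can then extract features over each spatial range separately.
    return [segment_graph(lm) for lm in label_maps]

def gcn_aggregate(adj, feats):
    # Mean aggregation over each pixel's graph neighborhood
    # (row-normalized adjacency times the feature matrix).
    deg = adj.sum(axis=1, keepdims=True)
    return (adj / deg) @ feats
```

Coarser segmentations yield denser graphs, so aggregation at each scale pools spectral features over a different spatial extent.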
HOVER: Hyperbolic Video-Text Retrieval
IF 10.6 · CAS Tier 1 · Computer Science
IEEE Transactions on Image Processing | Pub Date: 2025-09-23 | DOI: 10.1109/tip.2025.3611174
Jun Wen, Yufeng Chen, Ruiqi Shi, Wei Ji, Menglin Yang, Difei Gao, Junsong Yuan, Roger Zimmermann
Citations: 0
No-Reference Image Quality Assessment Leveraging GenAI Images
IF 10.6 · CAS Tier 1 · Computer Science
IEEE Transactions on Image Processing | Pub Date: 2025-09-22 | DOI: 10.1109/tip.2025.3610238
Qingbing Sang, Qian Li, Lixiong Liu, Zhaohong Deng, Xiaojun Wu, Alan C. Bovik
Citations: 0
Privacy-Preserving Visual Localization with Event Cameras
IF 10.6 · CAS Tier 1 · Computer Science
IEEE Transactions on Image Processing | Pub Date: 2025-09-22 | DOI: 10.1109/tip.2025.3607640
Junho Kim, Young Min Kim, Ramzi Zahreddine, Weston A. Welge, Gurunandan Krishnan, Sizhuo Ma, Jian Wang
Citations: 0
SRS: Siamese Reconstruction-Segmentation Network Based on Dynamic-Parameter Convolution
IF 10.6 · CAS Tier 1 · Computer Science
IEEE Transactions on Image Processing | Pub Date: 2025-09-19 | DOI: 10.1109/tip.2025.3607624
Bingkun Nian, Fenghe Tang, Jianrui Ding, Jie Yang, Zhonglong Zheng, Shaohua Kevin Zhou, Wei Liu
Citations: 0
Gradient and Structure Consistency in Multimodal Emotion Recognition
IF 10.6 · CAS Tier 1 · Computer Science
IEEE Transactions on Image Processing | Pub Date: 2025-09-18 | DOI: 10.1109/tip.2025.3608664
QingHongYa Shi, Mang Ye, Wenke Huang, Bo Du, Xiaofen Zong
Multimodal emotion recognition integrates text, visual, and audio data to holistically infer an individual's emotional state. Existing research predominantly focuses on exploiting modality-specific cues for joint learning, often ignoring the differences between modalities under a common learning objective. Due to multimodal heterogeneity, common-objective learning inadvertently introduces optimization biases and interaction noise. To address these challenges, we propose a novel approach named Gradient and Structure Consistency (GSCon), which operates at both the overall and individual levels to achieve balanced optimization and effective interaction, respectively. At the overall level, to prevent one modality from suppressing the optimization of others, we construct a balanced gradient direction that aligns each modality's optimization direction, ensuring unbiased convergence. At the individual level, to avoid the interaction noise caused by multimodal alignment, we align the spatial structure of samples across modalities; since this spatial structure does not differ due to modal heterogeneity, it enables effective inter-modal interaction. Extensive experiments on multimodal emotion recognition and multimodal intention understanding datasets demonstrate the effectiveness of the proposed method. Code is available at https://github.com/ShiQingHongYa/GSCon.
Citations: 0
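The overall-level "balanced gradient direction" idea can be sketched in a few lines of numpy. This is an illustrative guess at the mechanism, not GSCon's actual formulation: averaging unit-norm per-modality gradients to form the balanced direction, and deprojecting any gradient component that points against it, are both assumptions.

```python
import numpy as np

def balanced_gradient_step(grads, eps=1e-12):
    # Balanced direction: mean of the unit-norm per-modality gradients,
    # so no single large-magnitude modality dominates the update.
    units = [g / (np.linalg.norm(g) + eps) for g in grads]
    d = np.mean(units, axis=0)
    d = d / (np.linalg.norm(d) + eps)
    # Align each modality: if a gradient conflicts with the balanced
    # direction (negative dot product), remove the conflicting component.
    aligned = []
    for g in grads:
        dot = float(g @ d)
        aligned.append(g if dot >= 0.0 else g - dot * d)
    return d, aligned
```

After this step every per-modality gradient has a nonnegative component along the shared direction, so no modality's update directly opposes the others.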
Semantic-Driven Global-Local Fusion Transformer for Image Super-Resolution
IF 10.6 · CAS Tier 1 · Computer Science
IEEE Transactions on Image Processing | Pub Date: 2025-09-18 | DOI: 10.1109/tip.2025.3609106
Kaibing Zhang, Zhouwei Cheng, Xin He, Jie Li, Xinbo Gao
Image super-resolution (SR) has seen remarkable progress with the emergence of transformer-based architectures. However, due to high computational cost, many existing transformer-based SR methods restrict attention to local windows, which hinders their ability to model long-range dependencies and global structures. To address these challenges, we propose a novel SR framework named Semantic-Driven Global-Local Fusion Transformer (SGLFT). The proposed model enlarges the receptive field by combining a Hybrid Window Transformer (HWT) and a Scalable Transformer Module (STM) to jointly capture local textures and global context. To further strengthen the semantic consistency of reconstruction, we introduce a Semantic Extraction Module (SEM) that distills high-level semantic priors from the input; these semantic cues are adaptively integrated with visual features through an Adaptive Feature Fusion Semantic Integration Module (AFFSIM). Extensive experiments on standard benchmarks demonstrate the effectiveness of SGLFT in producing visually faithful and structurally consistent SR results. The code will be available at https://github.com/kbzhang0505/SGLFT.
Citations: 0
NanoHTNet: Nano Human Topology Network for Efficient 3D Human Pose Estimation
IF 10.6 · CAS Tier 1 · Computer Science
IEEE Transactions on Image Processing | Pub Date: 2025-09-17 | DOI: 10.1109/tip.2025.3608662
Jialun Cai, Mengyuan Liu, Hong Liu, Shuheng Zhou, Wenhao Li
Citations: 0