Signal Processing-Image Communication: Latest Articles

Rethinking erasing strategy on weakly supervised object localization
IF 3.4 · CAS Q3, Engineering & Technology
Signal Processing-Image Communication · Pub Date: 2025-02-18 · DOI: 10.1016/j.image.2025.117280 · Vol. 135, Article 117280
Yuming Fan, Shikui Wei, Chuangchuang Tan, Xiaotong Chen, Dongming Yang, Yao Zhao

Abstract: Weakly supervised object localization (WSOL) is a challenging task that aims to locate object regions in images using only image-level labels as supervision. Early research used erasing strategies to expand the localized regions, but these methods usually adopt a fixed threshold, which leads to over- or under-estimation of the object region. Additionally, the recent pseudo-label paradigm decouples the classification and localization tasks, causing confusion between foreground and background regions. In this paper, we propose the Soft-Erasing (SoE) method for WSOL, which includes two key modules: Adaptive Erasing (AE) and Flip Erasing (FE). The AE module dynamically adjusts the erasing threshold using the object's structural information, while the noise information module ensures the classifier focuses on the foreground region. The FE module effectively decouples object and background information using normalization and inversion techniques. We further introduce an activation loss and a reverse loss to strengthen semantic consistency in foreground regions. Experiments on public datasets demonstrate that our SoE framework significantly improves localization accuracy, achieving 70.86% GT-Known Loc on ILSVRC and 95.84% on CUB-200-2011.

Citations: 0
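The adaptive-threshold idea behind the AE module can be illustrated in a few lines. This is a minimal sketch, not the paper's implementation: the activation map `cam`, the quantile parameter, and the masking policy are all illustrative assumptions.

```python
import numpy as np

def adaptive_erase(feature_map, cam, quantile=0.8):
    """Erase the most-activated regions using a per-image threshold derived
    from the activation map's own distribution, instead of a fixed global
    cutoff (hypothetical sketch of adaptive erasing)."""
    thr = np.quantile(cam, quantile)  # threshold adapts to each image
    keep = cam < thr                  # keep the low-activation regions
    return feature_map * keep
```

A fixed threshold erases the same fraction of activation range on every image; a quantile-based one erases the same fraction of pixels, which is one simple way to make the erasing adapt to each object's structure.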
DA-Net: Deep attention network for biomedical image segmentation
IF 3.4 · CAS Q3, Engineering & Technology
Signal Processing-Image Communication · Pub Date: 2025-02-13 · DOI: 10.1016/j.image.2025.117283 · Vol. 135, Article 117283
Yingyan Gu, Yan Wang, Hua Ye, Xin Shu

Abstract: Deep learning-based image segmentation techniques are of great significance to biomedical image analysis and clinical disease diagnosis; U-Net is one of the classic biomedical image segmentation algorithms and is widely used in the field. In this paper, we propose an improved triplet attention module and embed it into the U-Net framework to form a novel deep attention network, called DA-Net, for biomedical image segmentation. Specifically, an additional layer is stacked onto the original U-Net, resulting in a six-layer U-shaped network. The double-convolution module of U-Net is then replaced with a composite block, consisting of the improved triplet attention module and a residual concatenation block, to extract abundant, valuable features effectively. We redesign the network structure to increase its width and depth and train the model with a pixel-position-aware loss, realizing a synchronous increase in mean IoU and average Dice. Extensive experiments have been carried out on two publicly available biomedical datasets, the 2018 Data Science Bowl (DSB) and the International Skin Imaging Collaboration (ISIC) 2018 Challenge, and on a self-built fetal cerebellar ultrasound dataset from the Affiliated Hospital of Jiangsu University, named JSUAH-Cerebellum. The mIoU and mDice of DA-Net reach 87.45% and 92.98% on JSUAH-Cerebellum, 87.36% and 91.37% on the 2018 Data Science Bowl, and 86.75% and 91.34% on the ISIC-2018 Challenge, respectively. Experimental results demonstrate that DA-Net achieves promising segmentation robustness and generalization ability.

Citations: 0
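For reference, the mIoU and mDice figures quoted above are averages of per-class IoU and Dice scores computed from binary masks. A minimal sketch of the two metrics (not the paper's evaluation code):

```python
import numpy as np

def iou_and_dice(pred, gt):
    """Binary IoU and Dice coefficient from boolean segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / (pred.sum() + gt.sum()) if union else 1.0
    return iou, dice
```

Dice weights the overlap twice relative to the mask sizes, so it is always at least as large as IoU for the same prediction, which is why mDice exceeds mIoU in the numbers above.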
No-reference image quality assessment based on improved vision transformer and transfer learning
IF 3.4 · CAS Q3, Engineering & Technology
Signal Processing-Image Communication · Pub Date: 2025-02-11 · DOI: 10.1016/j.image.2025.117282 · Vol. 135, Article 117282
Bo Zhang, Luoxi Wang, Cheng Zhang, Ran Zhao, Jinlu Sun

Abstract: To improve the accuracy and generalization of existing no-reference image quality assessment models on small datasets, we propose a model based on an improved vision transformer and transfer learning. First, ResNet is employed as the feature extraction network to obtain basic perceptual features from input images, and a Convolutional Block Attention Module (CBAM) is introduced to further improve its feature extraction capability. Second, a Transformer encoder regresses the multi-layer features, improving the network's ability to capture global image information and predict scores. Lastly, to overcome the performance limitations of Transformer models on small datasets, transfer learning is used to address the relatively small capacity of image quality assessment databases. The model is trained and tested on three small-scale datasets and compared with seven mainstream algorithms, with performance analyzed across three dimensions using statistical significance tests. The results show that, while the model does not perform best at distinguishing between similar and significantly different pairs, it remains competitive. It performs exceptionally well in assessing quality differences and in area under the curve (AUC), highlighting its strong potential for practical applications.

Citations: 0
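To make the CBAM reference concrete, here is a numpy sketch of the channel branch of a CBAM-style attention block. The weight shapes, reduction ratio, and demo input are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Channel branch of a CBAM-style block: squeeze the spatial dims with
    average- and max-pooling, pass both through a shared bottleneck MLP,
    and gate the channels with a sigmoid."""
    avg = feat.mean(axis=(1, 2))                  # (C,) average-pooled
    mx = feat.max(axis=(1, 2))                    # (C,) max-pooled
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)  # shared 2-layer MLP, ReLU
    gate = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))
    return feat * gate[:, None, None]             # reweight each channel

# illustrative shapes: 8 channels, bottleneck of 2 (reduction ratio 4)
rng = np.random.default_rng(0)
feat = rng.random((8, 16, 16))
w1, w2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 2))
out = channel_attention(feat, w1, w2)
```

The full CBAM follows this with a spatial branch; only the channel branch is sketched here.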
Estimating the resize parameter in end-to-end learned image compression
IF 3.4 · CAS Q3, Engineering & Technology
Signal Processing-Image Communication · Pub Date: 2025-02-06 · DOI: 10.1016/j.image.2025.117277 · Vol. 135, Article 117277
Li-Heng Chen, Christos G. Bampis, Zhi Li, Lukáš Krasula, Alan C. Bovik

Abstract: We describe a search-free resizing framework that can further improve the rate-distortion tradeoff of recent learned image compression models. Our approach is simple: compose a pair of differentiable downsampling/upsampling layers that sandwich a neural compression model. To determine resize factors for different inputs, we utilize another neural network jointly trained with the compression model, with the end goal of minimizing the rate-distortion objective. Our results suggest that "compression-friendly" downsampled representations can be quickly determined during encoding by using an auxiliary network and differentiable image warping. Through extensive experiments on existing deep image compression models, we show that our resize-parameter estimation framework provides Bjøntegaard-Delta rate (BD-rate) improvements of about 10% against leading perceptual quality engines. We also carried out a subjective quality study, whose results show that our approach yields favorable compressed images. To facilitate reproducible research in this direction, the implementation used in this paper is freely available online at https://github.com/treammm/ResizeCompression.

Citations: 0
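The downsample/upsample "sandwich" can be sketched with a plain bilinear resize. This is a numpy illustration with no autograd; in the paper the resize layers are differentiable and the scale factor is predicted per image by the auxiliary network rather than fixed as it is here:

```python
import numpy as np

def bilinear_resize(img, scale):
    """Bilinear resize of a 2D array by a scale factor (sketch)."""
    h, w = img.shape
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    ys = np.linspace(0, h - 1, nh)
    xs = np.linspace(0, w - 1, nw)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# sandwich: downsample -> (neural codec would run here) -> upsample back
img = np.random.default_rng(1).random((64, 64))
down = bilinear_resize(img, 0.5)     # "compression-friendly" representation
recon = bilinear_resize(down, 2.0)   # restore original resolution
```

The codec only ever sees `down`, so a well-chosen scale trades a small upsampling loss for a large rate saving.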
Full reference point cloud quality assessment using support vector regression
IF 3.4 · CAS Q3, Engineering & Technology
Signal Processing-Image Communication · Pub Date: 2025-02-01 · DOI: 10.1016/j.image.2024.117239 · Vol. 131, Article 117239
Ryosuke Watanabe, Shashank N. Sridhara, Haoran Hong, Eduardo Pavez, Keisuke Nonaka, Tatsuya Kobayashi, Antonio Ortega

Abstract: Point clouds are a general format for representing realistic 3D objects in diverse 3D applications. Since point clouds have large data sizes, developing efficient point cloud compression methods is crucial. However, excessive compression introduces various distortions that degrade the quality perceived by end users, so reliable point cloud quality assessment (PCQA) methods are essential as a benchmark for developing efficient compression methods. This paper presents an accurate full-reference PCQA (FR-PCQA) method, called full-reference quality assessment using support vector regression (FRSVR), for various types of degradation such as compression distortion, Gaussian noise, and down-sampling. The proposed method achieves accurate PCQA by integrating five FR metrics covering various types of error (e.g., geometric distortion, color distortion, and point count) using support vector regression (SVR). Moreover, it achieves a superior trade-off between accuracy and computation speed because it requires only these five simple metrics and SVR, which performs fast prediction. Experimental results on three open datasets show that the proposed method is more accurate than conventional FR-PCQA methods and faster than state-of-the-art methods that rely on complicated features such as curvature and multi-scale features. The proposed method thus provides excellent performance in both PCQA accuracy and processing speed. Our implementation is available at https://github.com/STAC-USC/FRSVR-PCQA.

Citations: 0
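The abstract does not list the five metrics, so as an illustration only, here is a feature vector of three simple full-reference quantities of the kinds it names (geometric distortion, color distortion, point count) that could feed a trained regressor such as SVR. The metric choices and the brute-force nearest-neighbor search are assumptions for the sketch:

```python
import numpy as np

def point_to_point_mse(ref, dist):
    """Symmetric point-to-point MSE: mean squared distance from each point
    to its nearest neighbor in the other cloud (brute force; suitable only
    for small clouds)."""
    def one_way(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
        return d2.min(axis=1).mean()
    return max(one_way(ref, dist), one_way(dist, ref))

def feature_vector(ref_xyz, dist_xyz, ref_rgb, dist_rgb):
    """Assemble simple FR features to feed a trained regressor (sketch)."""
    return np.array([
        point_to_point_mse(ref_xyz, dist_xyz),             # geometric distortion
        ((ref_rgb - dist_rgb) ** 2).mean(),                # color distortion
        abs(len(ref_xyz) - len(dist_xyz)) / len(ref_xyz),  # point-count change
    ])
```

The regression step itself would map such vectors to subjective scores learned from a training set; that part is omitted here.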
Bayesian framework based additive intrinsic components optimization deformable model for image segmentation
IF 3.4 · CAS Q3, Engineering & Technology
Signal Processing-Image Communication · Pub Date: 2025-02-01 · DOI: 10.1016/j.image.2024.117238 · Vol. 131, Article 117238
Yanjun Ren, Dong Li, Liming Tang

Abstract: The effectiveness of image segmentation can be greatly compromised by factors such as inhomogeneity, low resolution, and noise. To address these challenges, we propose a new segmentation-oriented additive decomposition model for images. First, the model assumes that the image to be segmented is the sum of three components: the true image, a bias field, and noise. Second, we recover the true image in the image domain within a Bayesian framework and establish an active contour model, in which the conditional probability is assumed to follow a local Gaussian distribution. The prior probability is constructed jointly from three assumptions: the true image is described as a Markov field defined by a Gibbs energy function; the bias field b is modeled as a Gaussian distribution with mean 0 and variance σi; and, as an alternative, the evolution curve is regularized by means of a heat kernel convolution. Finally, the proposed multi-objective optimization model is solved numerically using variational and gradient descent algorithms. The effectiveness of the proposed model has been validated through experiments on various images, including natural images, degraded text documents, and others. The results show that, compared to the classical active contour model, our model improves across four evaluation metrics; the smallest increase is in the P value, at 5%, while the most significant improvement is in the JSC value, reaching 14%.

Citations: 0
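The additive decomposition described above can be written compactly. The notation here is reconstructed from the abstract (J denotes the true image, b the bias field, n the noise, x a pixel location):

```latex
I(x) = J(x) + b(x) + n(x), \qquad b(x) \sim \mathcal{N}\!\left(0, \sigma_i^{2}\right)
```

Under the Bayesian view, J is then sought as the maximizer of the posterior built from the local-Gaussian likelihood and the Gibbs-energy prior.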
Lp-norm distortion-efficient adversarial attack
IF 3.4 · CAS Q3, Engineering & Technology
Signal Processing-Image Communication · Pub Date: 2025-02-01 · DOI: 10.1016/j.image.2024.117241 · Vol. 131, Article 117241
Chao Zhou, Yuan-Gen Wang, Zi-Jia Wang, Xiangui Kang

Abstract: Adversarial examples have shown a powerful ability to make a well-trained model misclassify. Current mainstream adversarial attack methods consider only one of the L0-, L2-, and L∞-norm distortions. L0-norm-based methods make large modifications to individual pixels, which are visible to the naked eye, while L2- and L∞-norm-based methods are weakly robust against adversarial defenses because they diffuse tiny perturbations across all pixels. A more realistic adversarial perturbation should be both sparse and imperceptible. In this paper, we propose a novel Lp-norm distortion-efficient adversarial attack that not only has the smallest L2-norm loss but also significantly reduces the L0-norm distortion. To this end, we design a new optimization scheme that first optimizes an initial adversarial perturbation under an L2-norm constraint and then constructs a dimension-unimportance matrix for it; this matrix indicates the adversarial unimportance of each dimension of the initial perturbation. We further introduce the concept of an adversarial threshold for the matrix: every dimension whose unimportance exceeds the threshold is set to zero, greatly decreasing the L0-norm distortion. Experimental results on three benchmark datasets show that, under the same query budget, the adversarial examples generated by our method have lower L0- and L2-norm distortion than the state of the art. On the MNIST dataset in particular, our attack reduces L2-norm distortion by 8.1% while leaving 47% of pixels unattacked. This demonstrates the superiority of the proposed method over its competitors in terms of adversarial robustness and visual imperceptibility. The code is available.

Citations: 0
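The thresholding step on the dimension-unimportance matrix can be sketched as follows. The random perturbation and unimportance scores below are placeholders; in the paper the perturbation comes from an L2-constrained optimization and the scores from the constructed matrix:

```python
import numpy as np

def sparsify_perturbation(delta, unimportance, threshold):
    """Zero every dimension whose adversarial unimportance exceeds the
    threshold; this shrinks the L0-norm of the perturbation and can only
    shrink its L2-norm (sketch of the thresholding step)."""
    mask = unimportance <= threshold  # keep only the important dimensions
    return delta * mask

rng = np.random.default_rng(0)
delta = rng.normal(scale=0.1, size=784)  # hypothetical initial perturbation
unimp = rng.random(784)                  # hypothetical unimportance scores
sparse = sparsify_perturbation(delta, unimp, threshold=0.5)
```

Because zeroing entries never increases the Euclidean length, the sparsified perturbation is guaranteed to be no larger in L2-norm than the initial one, though it may need re-validation as adversarial.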
Data-driven gradient priors integrated into blind image deblurring
IF 3.4 · CAS Q3, Engineering & Technology
Signal Processing-Image Communication · Pub Date: 2025-01-28 · DOI: 10.1016/j.image.2025.117275 · Vol. 135, Article 117275
Qing Qi, Jichang Guo, Chongyi Li

Abstract: Blind image deblurring is a severely ill-posed task. Most existing methods focus on deep learning to learn massive data features while ignoring the vital importance of classic image structure priors. We make extensive use of image gradient information in a data-driven way. In this paper, we present a Generative Adversarial Network (GAN) architecture based on image structure priors for blind non-uniform image deblurring. Previous image deblurring methods employ Convolutional Neural Networks (CNNs) to predict kernel estimates and non-blind deconvolution algorithms to obtain deblurred images. We instead incorporate image structure priors throughout the design of the network architecture and the target loss functions. To facilitate network optimization, we propose multi-term target loss functions that supervise the generator to produce images with pronounced structural attributes. In addition, we design a dual-discriminant mechanism that judges not only the image content but also whether the image edges are sharp. To learn image gradient features, we develop a dual-flow network that considers both the image and gradient domains. Our model directly avoids the accumulated errors caused by the two-step "kernel estimation, then non-blind deconvolution" pipeline. Extensive experiments on both synthetic datasets and real-world images demonstrate that our model outperforms state-of-the-art methods.

Citations: 0
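Gradient-domain supervision of the kind described can be illustrated with a finite-difference gradient loss. This is a generic sketch, not the paper's multi-term loss:

```python
import numpy as np

def gradient_l1_loss(pred, target):
    """L1 distance between the horizontal and vertical finite-difference
    gradients of two images: penalizes blurry or misplaced edges even when
    pixel intensities are close."""
    gx = lambda im: im[:, 1:] - im[:, :-1]  # horizontal gradient
    gy = lambda im: im[1:, :] - im[:-1, :]  # vertical gradient
    return (np.abs(gx(pred) - gx(target)).mean()
            + np.abs(gy(pred) - gy(target)).mean())
```

A plain pixel-wise loss is minimized by slightly blurry outputs; adding a term like this pushes the generator toward matching edge structure as well.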
DarkSegNet: Low-light semantic segmentation network based on image pyramid
IF 3.4 · CAS Q3, Engineering & Technology
Signal Processing-Image Communication · Pub Date: 2025-01-23 · DOI: 10.1016/j.image.2025.117265 · Vol. 135, Article 117265
Jintao Tan, Longyang Huang, Zhonghui Chen, Ruokun Qu, Chenglong Li

Abstract: In computer vision, semantic segmentation of images captured under low-light conditions is a formidable challenge. To address it, we introduce a novel low-light semantic segmentation model named DarkSegNet. DarkSegNet combines image pyramid decomposition, a spatial low-frequency attention (SLA) module, and a channel low-frequency information enhancement (CLIE) module; these components work synergistically to extract the latent information embedded in low-light images, resulting in improved low-light segmentation performance. We conduct experiments on the UAV indoor low-light LLRGBD-real dataset, where DarkSegNet achieves the highest mIoU, 47.9%, compared with other mainstream semantic segmentation methods. Notably, our model is trained end to end, avoiding the need for an additional image enhancement module. DarkSegNet holds significant potential for drone-based rescue operations in disaster-stricken environments.

Citations: 0
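The image-pyramid decomposition DarkSegNet starts from can be sketched as repeated 2x2 average-pool downsampling. This is a stand-in for whatever pyramid (e.g. Gaussian) the paper actually uses:

```python
import numpy as np

def average_pyramid(img, levels=3):
    """Build an image pyramid by repeated 2x2 average pooling; each level
    halves the spatial resolution (sketch of pyramid decomposition)."""
    out = [img]
    for _ in range(levels - 1):
        a = out[-1]
        h, w = a.shape[0] - a.shape[0] % 2, a.shape[1] - a.shape[1] % 2
        out.append(a[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return out
```

Coarser levels retain mostly low-frequency content, which is what the SLA and CLIE modules are described as operating on.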
UIEVUS: An underwater image enhancement method for various underwater scenes
IF 3.4 · CAS Q3, Engineering & Technology
Signal Processing-Image Communication · Pub Date: 2025-01-22 · DOI: 10.1016/j.image.2025.117264 · Vol. 135, Article 117264
Siyi Ren, Xianqiang Bao, Tianjiang Wang, Xinghua Xu, Tao Ma, Kun Yu

Abstract: Due to the scattering and absorption of light in water, underwater images commonly suffer degradations such as color distortion and uneven brightness. To address these challenges, we introduce UIEVUS, an underwater image enhancement method designed for various underwater scenes. Building on Retinex theory, our method combines Retinex decomposition with generative adversarial learning for targeted enhancement. The core innovation of UIEVUS lies in separately processing and recovering the illumination and reflection maps before merging them into the final enhanced result. Specifically, the method first applies Retinex decomposition to separate the original underwater image into an illumination map (capturing uneven lighting) and a reflection map (capturing color distortion). The reflection map is restored by a lightweight encoder-decoder network that uses generative adversarial learning to recover color information. Concurrently, the illumination map is enhanced under the guidance of the reflection map, improving edges, details, and brightness while reducing noise. The enhanced components are then merged to produce the final result. Extensive experiments demonstrate that UIEVUS performs competitively against comparison algorithms across various benchmarks. Notably, our method strikes a good balance between computational efficiency and enhancement quality, making it suitable for practical UUV applications.

Citations: 0
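A classic single-scale Retinex split of the kind UIEVUS builds on can be sketched in numpy. The Gaussian-blur illumination estimate and the log-domain residual are one common formulation, used here as an illustrative assumption rather than the paper's decomposition:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian smoothing of a 2D array (kernel truncated at 3*sigma)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, rows)

def retinex_decompose(img, sigma=5.0, eps=1e-6):
    """Split an image into an illumination map (heavy smoothing) and a
    reflection map (log-domain residual), so that I = L * exp(R)."""
    illumination = gaussian_blur(img, sigma)
    reflection = np.log(img + eps) - np.log(illumination + eps)
    return illumination, reflection
```

The two maps can then be enhanced independently, as the method describes, and recombined via `L * exp(R)`.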