Signal Processing-Image Communication: Latest Articles

Estimating the resize parameter in end-to-end learned image compression
IF 3.4 | CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication | Pub Date: 2025-02-06 | DOI: 10.1016/j.image.2025.117277
Li-Heng Chen, Christos G. Bampis, Zhi Li, Lukáš Krasula, Alan C. Bovik
{"title":"Estimating the resize parameter in end-to-end learned image compression","authors":"Li-Heng Chen ,&nbsp;Christos G. Bampis ,&nbsp;Zhi Li ,&nbsp;Lukáš Krasula ,&nbsp;Alan C. Bovik","doi":"10.1016/j.image.2025.117277","DOIUrl":"10.1016/j.image.2025.117277","url":null,"abstract":"<div><div>We describe a search-free resizing framework that can further improve the rate–distortion tradeoff of recent learned image compression models. Our approach is simple: compose a pair of differentiable downsampling/upsampling layers that sandwich a neural compression model. To determine resize factors for different inputs, we utilize another neural network jointly trained with the compression model, with the end goal of minimizing the rate–distortion objective. Our results suggest that “compression friendly” downsampled representations can be quickly determined during encoding by using an auxiliary network and differentiable image warping. By conducting extensive experimental tests on existing deep image compression models, we show results that our new resizing parameter estimation framework can provide Bjøntegaard-Delta rate (BD-rate) improvement of about 10% against leading perceptual quality engines. We also carried out a subjective quality study, the results of which show that our new approach yields favorable compressed images. To facilitate reproducible research in this direction, the implementation used in this paper is being made freely available online at: <span><span>https://github.com/treammm/ResizeCompression</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"135 ","pages":"Article 117277"},"PeriodicalIF":3.4,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143387866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Full reference point cloud quality assessment using support vector regression
IF 3.4 | CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication | Pub Date: 2025-02-01 | DOI: 10.1016/j.image.2024.117239
Ryosuke Watanabe, Shashank N. Sridhara, Haoran Hong, Eduardo Pavez, Keisuke Nonaka, Tatsuya Kobayashi, Antonio Ortega
{"title":"Full reference point cloud quality assessment using support vector regression","authors":"Ryosuke Watanabe ,&nbsp;Shashank N. Sridhara ,&nbsp;Haoran Hong ,&nbsp;Eduardo Pavez ,&nbsp;Keisuke Nonaka ,&nbsp;Tatsuya Kobayashi ,&nbsp;Antonio Ortega","doi":"10.1016/j.image.2024.117239","DOIUrl":"10.1016/j.image.2024.117239","url":null,"abstract":"<div><div>Point clouds are a general format for representing realistic 3D objects in diverse 3D applications. Since point clouds have large data sizes, developing efficient point cloud compression methods is crucial. However, excessive compression leads to various distortions, which deteriorates the point cloud quality perceived by end users. Thus, establishing reliable point cloud quality assessment (PCQA) methods is essential as a benchmark to develop efficient compression methods. This paper presents an accurate full-reference point cloud quality assessment (FR-PCQA) method called full-reference quality assessment using support vector regression (FRSVR) for various types of degradations such as compression distortion, Gaussian noise, and down-sampling. The proposed method demonstrates accurate PCQA by integrating five FR-based metrics covering various types of errors (e.g., considering geometric distortion, color distortion, and point count) using support vector regression (SVR). Moreover, the proposed method achieves a superior trade-off between accuracy and calculation speed because it includes only the calculation of these five simple metrics and SVR, which can perform fast prediction. Experimental results with three types of open datasets show that the proposed method is more accurate than conventional FR-PCQA methods. In addition, the proposed method is faster than state-of-the-art methods that utilize complicated features such as curvature and multi-scale features. Thus, the proposed method provides excellent performance in terms of the accuracy of PCQA and processing speed. Our method is available from <span><span>https://github.com/STAC-USC/FRSVR-PCQA</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"131 ","pages":"Article 117239"},"PeriodicalIF":3.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Bayesian framework based additive intrinsic components optimization deformable model for image segmentation
IF 3.4 | CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication | Pub Date: 2025-02-01 | DOI: 10.1016/j.image.2024.117238
Yanjun Ren, Dong Li, Liming Tang
{"title":"Bayesian framework based additive intrinsic components optimization deformable model for image segmentation","authors":"Yanjun Ren ,&nbsp;Dong Li ,&nbsp;Liming Tang","doi":"10.1016/j.image.2024.117238","DOIUrl":"10.1016/j.image.2024.117238","url":null,"abstract":"<div><div>The effectiveness of image segmentation can be greatly compromised by factors like inhomogeneity, low-resolution, and noise. Aiming at these challenges, we propose a new segmentation-oriented additive decomposition model for images. Firstly, the model assumes that the to be segmented image is the sum of three components: true image, bias field, and noise. Secondly, we pursue the true image in the image domain base on Bayesian framework, and establish the active contour model. In this model, the conditional probability is assumed to follow a local Gaussian distribution. The prior probability is constructed jointly by the following three assumptions. Specifically, we describe the true image as a Markov field defined as the Gibbs energy function. The bias field <span><math><mi>b</mi></math></span> is modeled as a Gaussian distribution with mean 0 and variance <span><math><msub><mrow><mi>σ</mi></mrow><mrow><mi>i</mi></mrow></msub></math></span>. In addition, as an alternative, we employ regularization to the evolution curve by means of heat kernel convolution function. Finally, the proposed multi-objective optimization model is solved numerically using variational and gradient descent algorithms. The effectiveness of the proposed model has been validated through experiments conducted on various images, including natural, degraded text document, and others. The results show that compared to the classical active contour model, our model improve across four evaluation metrics. Among these, the smallest increase is in the P value, at 5%, while the most significant improvement is in the JSC value, reaching 14%.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"131 ","pages":"Article 117238"},"PeriodicalIF":3.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Lp-norm distortion-efficient adversarial attack
IF 3.4 | CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication | Pub Date: 2025-02-01 | DOI: 10.1016/j.image.2024.117241
Chao Zhou, Yuan-Gen Wang, Zi-Jia Wang, Xiangui Kang
{"title":"Lp-norm distortion-efficient adversarial attack","authors":"Chao Zhou ,&nbsp;Yuan-Gen Wang ,&nbsp;Zi-Jia Wang ,&nbsp;Xiangui Kang","doi":"10.1016/j.image.2024.117241","DOIUrl":"10.1016/j.image.2024.117241","url":null,"abstract":"&lt;div&gt;&lt;div&gt;Adversarial examples have shown a powerful ability to make a well-trained model misclassified. Current mainstream adversarial attack methods only consider one of the distortions among &lt;span&gt;&lt;math&gt;&lt;msub&gt;&lt;mrow&gt;&lt;mi&gt;L&lt;/mi&gt;&lt;/mrow&gt;&lt;mrow&gt;&lt;mn&gt;0&lt;/mn&gt;&lt;/mrow&gt;&lt;/msub&gt;&lt;/math&gt;&lt;/span&gt;-norm, &lt;span&gt;&lt;math&gt;&lt;msub&gt;&lt;mrow&gt;&lt;mi&gt;L&lt;/mi&gt;&lt;/mrow&gt;&lt;mrow&gt;&lt;mn&gt;2&lt;/mn&gt;&lt;/mrow&gt;&lt;/msub&gt;&lt;/math&gt;&lt;/span&gt;-norm, and &lt;span&gt;&lt;math&gt;&lt;msub&gt;&lt;mrow&gt;&lt;mi&gt;L&lt;/mi&gt;&lt;/mrow&gt;&lt;mrow&gt;&lt;mi&gt;∞&lt;/mi&gt;&lt;/mrow&gt;&lt;/msub&gt;&lt;/math&gt;&lt;/span&gt;-norm. &lt;span&gt;&lt;math&gt;&lt;msub&gt;&lt;mrow&gt;&lt;mi&gt;L&lt;/mi&gt;&lt;/mrow&gt;&lt;mrow&gt;&lt;mn&gt;0&lt;/mn&gt;&lt;/mrow&gt;&lt;/msub&gt;&lt;/math&gt;&lt;/span&gt;-norm based methods cause large modification on a single pixel, resulting in naked-eye visible detection, while &lt;span&gt;&lt;math&gt;&lt;msub&gt;&lt;mrow&gt;&lt;mi&gt;L&lt;/mi&gt;&lt;/mrow&gt;&lt;mrow&gt;&lt;mn&gt;2&lt;/mn&gt;&lt;/mrow&gt;&lt;/msub&gt;&lt;/math&gt;&lt;/span&gt;-norm and &lt;span&gt;&lt;math&gt;&lt;msub&gt;&lt;mrow&gt;&lt;mi&gt;L&lt;/mi&gt;&lt;/mrow&gt;&lt;mrow&gt;&lt;mi&gt;∞&lt;/mi&gt;&lt;/mrow&gt;&lt;/msub&gt;&lt;/math&gt;&lt;/span&gt;-norm based methods suffer from weak robustness against adversarial defense since they always diffuse tiny perturbations to all pixels. A more realistic adversarial perturbation should be sparse and imperceptible. In this paper, we propose a novel &lt;span&gt;&lt;math&gt;&lt;msub&gt;&lt;mrow&gt;&lt;mi&gt;L&lt;/mi&gt;&lt;/mrow&gt;&lt;mrow&gt;&lt;mi&gt;p&lt;/mi&gt;&lt;/mrow&gt;&lt;/msub&gt;&lt;/math&gt;&lt;/span&gt;-norm distortion-efficient adversarial attack, which not only owns the least &lt;span&gt;&lt;math&gt;&lt;msub&gt;&lt;mrow&gt;&lt;mi&gt;L&lt;/mi&gt;&lt;/mrow&gt;&lt;mrow&gt;&lt;mn&gt;2&lt;/mn&gt;&lt;/mrow&gt;&lt;/msub&gt;&lt;/math&gt;&lt;/span&gt;-norm loss but also significantly reduces the &lt;span&gt;&lt;math&gt;&lt;msub&gt;&lt;mrow&gt;&lt;mi&gt;L&lt;/mi&gt;&lt;/mrow&gt;&lt;mrow&gt;&lt;mn&gt;0&lt;/mn&gt;&lt;/mrow&gt;&lt;/msub&gt;&lt;/math&gt;&lt;/span&gt;-norm distortion. To this aim, we design a new optimization scheme, which first optimizes an initial adversarial perturbation under &lt;span&gt;&lt;math&gt;&lt;msub&gt;&lt;mrow&gt;&lt;mi&gt;L&lt;/mi&gt;&lt;/mrow&gt;&lt;mrow&gt;&lt;mn&gt;2&lt;/mn&gt;&lt;/mrow&gt;&lt;/msub&gt;&lt;/math&gt;&lt;/span&gt;-norm constraint, and then constructs a dimension unimportance matrix for the initial perturbation. Such a dimension unimportance matrix can indicate the adversarial unimportance of each dimension of the initial perturbation. Furthermore, we introduce a new concept of adversarial threshold for the dimension unimportance matrix. The dimensions of the initial perturbation whose unimportance is higher than the threshold will be all set to zero, greatly decreasing the &lt;span&gt;&lt;math&gt;&lt;msub&gt;&lt;mrow&gt;&lt;mi&gt;L&lt;/mi&gt;&lt;/mrow&gt;&lt;mrow&gt;&lt;mn&gt;0&lt;/mn&gt;&lt;/mrow&gt;&lt;/msub&gt;&lt;/math&gt;&lt;/span&gt;-norm distortion. 
Experimental results on three benchmark datasets show that under the same query budget, the adversarial examples generated by our method have lower &lt;span&gt;&lt;math&gt;&lt;msub&gt;&lt;mrow&gt;&lt;mi&gt;L&lt;/mi&gt;&lt;/mrow&gt;&lt;mrow&gt;&lt;mn&gt;0&lt;/mn&gt;&lt;/mrow&gt;&lt;/msub&gt;&lt;/math&gt;&lt;/span&gt;-norm and &lt;span&gt;&lt;math&gt;&lt;msub&gt;&lt;mrow&gt;&lt;mi&gt;L&lt;/mi&gt;&lt;/mrow&gt;&lt;mrow&gt;&lt;mn&gt;2&lt;/mn&gt;&lt;/mrow&gt;&lt;/msub&gt;&lt;/math&gt;&lt;/span&gt;-norm distortion than the state-of-the-art. Especially for the MNIST dataset, our attack reduces 8.1% &lt;span&gt;&lt;math&gt;&lt;msub&gt;&lt;mrow&gt;&lt;mi&gt;L&lt;/mi&gt;&lt;/mrow&gt;&lt;mrow&gt;&lt;mn&gt;2&lt;/mn&gt;&lt;/mrow&gt;&lt;/msub&gt;&lt;/math&gt;&lt;/span&gt;-norm distortion meanwhile remaining 47% pixels unattacked. This demonstrates the superiority of the proposed method over its competitors in terms of adversarial robustness and visual imperceptibility. The code is avail","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"131 ","pages":"Article 117241"},"PeriodicalIF":3.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143129355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
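A toy sketch of the thresholding step described above; the unimportance proxy (inverse perturbation magnitude) and the quantile threshold are assumptions for illustration, not the paper's construction:

```python
import numpy as np

def sparsify(delta, unimportance, tau):
    # Zero out every dimension whose adversarial unimportance exceeds tau,
    # cutting L0 distortion while keeping the important perturbation dims.
    out = delta.copy()
    out[unimportance > tau] = 0.0
    return out

rng = np.random.default_rng(0)
delta = 0.1 * rng.standard_normal(784)        # initial L2-optimized perturbation (MNIST-sized)
unimp = 1.0 / (np.abs(delta) + 1e-8)          # assumed proxy: small-magnitude dims matter less
tau = np.quantile(unimp, 0.53)                # threshold chosen so ~47% of dims are dropped
sparse_delta = sparsify(delta, unimp, tau)
print((sparse_delta == 0).mean())             # fraction of pixels left unattacked
```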
Citations: 0
Data-driven gradient priors integrated into blind image deblurring
IF 3.4 | CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication | Pub Date: 2025-01-28 | DOI: 10.1016/j.image.2025.117275
Qing Qi, Jichang Guo, Chongyi Li
{"title":"Data-driven gradient priors integrated into blind image deblurring","authors":"Qing Qi ,&nbsp;Jichang Guo ,&nbsp;Chongyi Li","doi":"10.1016/j.image.2025.117275","DOIUrl":"10.1016/j.image.2025.117275","url":null,"abstract":"<div><div>Blind image deblurring is a severely ill-posed task. Most existing methods focus on deep learning to learn massive data features while ignoring the vital significance of classic image structure priors. We make extensive use of the image gradient information in a data-driven way. In this paper, we present a Generative Adversarial Network (GAN) architecture based on image structure priors for blind non-uniform image deblurring. Previous image deblurring methods employ Convolutional Neural Networks (CNNs) and non-blind deconvolution algorithms to predict kernel estimations and obtain deblurred images, respectively. We permeate the structure prior of images throughout the design of network architectures and target loss functions. To facilitate network optimization, we propose multi-term target loss functions aimed to supervise the generator to have images with significant structure attributes. In addition, we design a dual-discriminant mechanism for discriminating whether the image edge is clear or not. Not only image content but also the sharpness of image structures need to be discriminated. To learn image gradient features, we develop a dual-flow network that considers both the image and gradient domains to learn image gradient features. Our model directly avoids the accumulated errors caused by two steps of “kernel estimation-non-blind deconvolution”. Extensive experiments on both synthetic datasets and real-world images demonstrate that our model outperforms state-of-the-art methods.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"135 ","pages":"Article 117275"},"PeriodicalIF":3.4,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143350862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DarkSegNet: Low-light semantic segmentation network based on image pyramid
IF 3.4 | CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication | Pub Date: 2025-01-23 | DOI: 10.1016/j.image.2025.117265
Jintao Tan, Longyang Huang, Zhonghui Chen, Ruokun Qu, Chenglong Li
{"title":"DarkSegNet: Low-light semantic segmentation network based on image pyramid","authors":"Jintao Tan,&nbsp;Longyang Huang,&nbsp;Zhonghui Chen,&nbsp;Ruokun Qu,&nbsp;Chenglong Li","doi":"10.1016/j.image.2025.117265","DOIUrl":"10.1016/j.image.2025.117265","url":null,"abstract":"<div><div>In the domain of computer vision, the task of semantic segmentation for images captured under low-light conditions has proven to be a formidable challenge. To address this challenge, we introduce a novel low-light semantic segmentation model named DarkSegNet. The DarkSegNet model aims to deal with the problem of semantic segmentation of low-light images. It effectively mines potential information in images by combining image pyramid decomposition, spatial low-frequency attention (SLA) module, and channel low-frequency information enhancement (CLIE) module to achieve better low-light semantic segmentation performance. These components work synergistically to effectively extract latent information embedded within the low-light image, ultimately resulting in improved performance of low-light semantic segmentation. We conduct experiments on the UAV indoor low-light LLRGBD-real dataset. Compared to other mainstream semantic segmentation methods, DarkSegNet achieves the highest mIoU of 47.9% on the UAV indoor low-light LLRGBD-real dataset. It is worth emphasizing that our model implements end-to-end training, avoiding the need to design additional image enhancement modules. The DarkSegNet network holds significant potential for facilitating drone-based rescue operations in disaster-stricken environments.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"135 ","pages":"Article 117265"},"PeriodicalIF":3.4,"publicationDate":"2025-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143157474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
UIEVUS: An underwater image enhancement method for various underwater scenes
IF 3.4 | CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication | Pub Date: 2025-01-22 | DOI: 10.1016/j.image.2025.117264
Siyi Ren, Xianqiang Bao, Tianjiang Wang, Xinghua Xu, Tao Ma, Kun Yu
{"title":"UIEVUS: An underwater image enhancement method for various underwater scenes","authors":"Siyi Ren ,&nbsp;Xianqiang Bao ,&nbsp;Tianjiang Wang ,&nbsp;Xinghua Xu ,&nbsp;Tao Ma ,&nbsp;Kun Yu","doi":"10.1016/j.image.2025.117264","DOIUrl":"10.1016/j.image.2025.117264","url":null,"abstract":"<div><div>Due to the scattering and absorption of light in water, underwater images commonly encounter degradation issues, such as color distortions and uneven brightness. To address these challenges, we introduce UIEVUS, an underwater image enhancement method designed for various underwater scenes. Building upon Retinex theory, our method implements an approach that combines Retinex decomposition with generative adversarial learning for targeted enhancement. The core innovation of UIEVUS lies in its ability to separately process and recover illumination and reflection maps before merging them into the final enhanced result. Specifically, the method first applies Retinex decomposition to separate the original underwater image into an illumination map (addressing uneven lighting) and a reflection map (addressing color distortion). The reflection map undergoes restoration through a lightweight encoder–decoder network that employs generative adversarial learning to recover color information. Concurrently, the illumination map receives enhancement guided by the reflection map, resulting in improved edges, details, brightness, and reduced noise. These enhanced components are then merged to produce the final result. Extensive experimental results demonstrate that UIEVUS achieves competitive performance against other comparative algorithms across various benchmark tests. Notably, our method strikes an optimal balance between computational efficiency and enhancement quality, making it suitable for practical UUV applications.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"135 ","pages":"Article 117264"},"PeriodicalIF":3.4,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143157475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Efficient Dehazing Method Using Pixel Unshuffle and Color Correction
IF 3.4 | CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication | Pub Date: 2025-01-19 | DOI: 10.1016/j.image.2025.117260
Hongyuan Jing, Kaiyan Wang, Zhiwei Zhu, Aidong Chen, Chen Hong, Mengmeng Zhang
{"title":"An Efficient Dehazing Method Using Pixel Unshuffle and Color Correction","authors":"Hongyuan Jing ,&nbsp;Kaiyan Wang ,&nbsp;Zhiwei Zhu ,&nbsp;Aidong Chen ,&nbsp;Chen Hong ,&nbsp;Mengmeng Zhang","doi":"10.1016/j.image.2025.117260","DOIUrl":"10.1016/j.image.2025.117260","url":null,"abstract":"<div><div>Severe weather conditions such as haze and rainstorm will lead to serious degradation of observed images, which will influence the performance of advanced visual tasks such as target detection. However, most of the existing image processing methods focus on dehazing while overlooking the restoration of image color and details. In this paper, we found that the variance of the RGB three channels of a pixel at a certain point in an RGB image is related to its corresponding degree of color brightness through a large number of experiments, and propose an efficient dehazing method called PUCCNet, which utilizes Pixel Unshuffle and Color Correction to enhance image detail information and improve color saturation. We designed a Detail Recover Block (DRB) in the network to capture the details of the input image and focus on local details through the attention mechanism. In the high-dimensional part of the network, a Depth Local Global Residual Block (DLGRB) is introduced, which can simultaneously handle local and global features, thereby enhancing the model's expressive capability, improving its generalization ability, and reducing the risk of overfitting. The network obtains local details through the attention mechanism, and makes the output image of higher quality through color correction, which is aligned with the human visual system. Extensive experiments on synthetic datasets and real-world datasets demonstrate that the proposed method outperforms existing state-of-the-art methods.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"134 ","pages":"Article 117260"},"PeriodicalIF":3.4,"publicationDate":"2025-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143171432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Scene recovery with detail-preserving
IF 3.4 | CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication | Pub Date: 2025-01-17 | DOI: 10.1016/j.image.2025.117266
Tingting Wu, Xinru Wang, Jun Liu, Tieyong Zeng
{"title":"Scene recovery with detail-preserving","authors":"Tingting Wu ,&nbsp;Xinru Wang ,&nbsp;Jun Liu ,&nbsp;Tieyong Zeng","doi":"10.1016/j.image.2025.117266","DOIUrl":"10.1016/j.image.2025.117266","url":null,"abstract":"<div><div>Images captured in sandstorms, hazy, snowy or underwater conditions often suffer from poor visibility. This is mainly due to the presence of atmospheric particles that scatter light. Based on the assumption of highly linear correlation between <span><math><msub><mrow><mi>S</mi></mrow><mrow><mi>n</mi><mi>u</mi></mrow></msub></math></span> and the observed intensity <span><math><mi>I</mi></math></span>, we first estimate the scattering map <span><math><mover><mrow><mi>t</mi></mrow><mrow><mo>̃</mo></mrow></mover></math></span> by projecting the input image <span><math><mi>I</mi></math></span> onto the unified spectrum <span><math><msub><mrow><mi>S</mi></mrow><mrow><mi>n</mi><mi>u</mi></mrow></msub></math></span>. We then apply the weighted guided image filter to make the corresponding transmission map <span><math><mi>t</mi></math></span> more accurate so that details and textures of the input image can be better recovered. Since the atmospheric light <span><math><mi>A</mi></math></span> is also critical to the scene recovery, we propose to use the quad-tree subdivision to extract a correct <span><math><mi>A</mi></math></span>. The quantitative and qualitative evaluations are reported in the numerical experiments. Compared with some SOTA methods, the images recovered by our method exhibit better visibility while preserving details.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"134 ","pages":"Article 117266"},"PeriodicalIF":3.4,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143171427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PointPCA+: A full-reference Point Cloud Quality Assessment metric with PCA-based features
IF 3.4 | CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication | Pub Date: 2025-01-17 | DOI: 10.1016/j.image.2025.117262
Xuemei Zhou, Evangelos Alexiou, Irene Viola, Pablo Cesar
{"title":"PointPCA+: A full-reference Point Cloud Quality Assessment metric with PCA-based features","authors":"Xuemei Zhou ,&nbsp;Evangelos Alexiou ,&nbsp;Irene Viola ,&nbsp;Pablo Cesar","doi":"10.1016/j.image.2025.117262","DOIUrl":"10.1016/j.image.2025.117262","url":null,"abstract":"<div><div>This paper introduces an enhanced Point Cloud Quality Assessment (PCQA) metric, termed PointPCA+, as an extension of PointPCA, with a focus on computational simplicity and feature richness. PointPCA+ refines the original PCA-based descriptors by employing Principal Component Analysis (PCA) solely on geometry data; additionally, the texture descriptors are refined through a direct application of the function on YCbCr values, enhancing the efficiency of computation. The metric combines geometry and texture features, capturing local shape and appearance properties, through a learning-based fusion to generate a total quality score. Prior to fusion, a feature selection module is incorporated to identify the most effective features from a proposed super-set. Experimental results demonstrate the high predictive performance of PointPCA+ against subjective ground truth scores obtained from four publicly available datasets. The metric consistently outperforms state-of-the-art solutions, offering valuable insights into the design of similarity measurements and the effectiveness of handcrafted features across various distortion types. The code of the proposed metric is available at <span><span>https://github.com/cwi-dis/pointpca_suite/</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"135 ","pages":"Article 117262"},"PeriodicalIF":3.4,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143157473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0