IEEE Transactions on Computational Imaging: Latest Publications

Adaptive Luminance Enhancement and High-Fidelity Color Correction for Low-Light Image Enhancement
IF 4.2 · CAS Tier 2 (Computer Science)
IEEE Transactions on Computational Imaging · Pub Date: 2025-04-24 · DOI: 10.1109/TCI.2025.3564112
Yuzhen Niu;Fusheng Li;Yuezhou Li;Siling Chen;Yuzhong Chen
{"title":"Adaptive Luminance Enhancement and High-Fidelity Color Correction for Low-Light Image Enhancement","authors":"Yuzhen Niu;Fusheng Li;Yuezhou Li;Siling Chen;Yuzhong Chen","doi":"10.1109/TCI.2025.3564112","DOIUrl":"https://doi.org/10.1109/TCI.2025.3564112","url":null,"abstract":"It is a challenging task to obtain high-quality images in low-light scenarios. While existing low-light image enhancement methods learn the mapping from low-light to clear images, such a straightforward approach lacks the targeted design for real-world scenarios, hampering their practical utility. As a result, issues such as overexposure and color distortion are likely to arise when processing images in uneven luminance or extreme darkness. To address these issues, we propose an adaptive luminance enhancement and high-fidelity color correction network (LCNet), which adopts a strategy of enhancing luminance first and then correcting color. Specifically, in the adaptive luminance enhancement stage, we design a multi-stage dual attention residual module (MDARM), which incorporates parallel spatial and channel attention mechanisms within residual blocks. This module extracts luminance prior from the low-light image to adaptively enhance luminance, while suppressing overexposure in areas with sufficient luminance. In the high-fidelity color correction stage, we design a progressive multi-scale feature fusion module (PMFFM) that combines progressively stage-wise multi-scale feature fusion with long/short skip connections, enabling thorough interaction between features at different scales across stages. This module extracts and fuses color features with varying receptive fields to ensure accurate and consistent color correction. Furthermore, we introduce a multi-color-space loss to effectively constrain the color correction. These two stages together produce high-quality images with appropriate luminance and high-fidelity color. Extensive experiments on both low-level and high-level tasks demonstrate that our LCNet outperforms state-of-the-art methods and achieves superior performance for low-light image enhancement in real-world scenarios.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"732-747"},"PeriodicalIF":4.2,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144196897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
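The multi-color-space loss can be made concrete with a short sketch: an L1 penalty accumulated in more than one color representation, so chrominance errors are penalized explicitly rather than only through RGB intensities. A minimal PyTorch sketch, assuming RGB plus a BT.601 YCbCr transform; the paper's exact choice of spaces and weights is not specified here:

```python
import torch
import torch.nn as nn

def rgb_to_ycbcr(x: torch.Tensor) -> torch.Tensor:
    # x: (B, 3, H, W) in [0, 1]; standard BT.601 full-range coefficients.
    r, g, b = x[:, 0:1], x[:, 1:2], x[:, 2:3]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return torch.cat([y, cb, cr], dim=1)

class MultiColorSpaceLoss(nn.Module):
    """L1 loss accumulated in RGB and YCbCr; the spaces and weights are
    illustrative stand-ins for the paper's multi-color-space loss."""
    def __init__(self, w_rgb: float = 1.0, w_ycbcr: float = 1.0):
        super().__init__()
        self.w_rgb, self.w_ycbcr = w_rgb, w_ycbcr
        self.l1 = nn.L1Loss()

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        loss = self.w_rgb * self.l1(pred, target)
        return loss + self.w_ycbcr * self.l1(rgb_to_ycbcr(pred), rgb_to_ycbcr(target))
```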
Score-Based Generative Model With Conditional Null-Space Learning for Limited-Angle Tomographic Reconstruction in Medical Imaging
IF 4.2 · CAS Tier 2 (Computer Science)
IEEE Transactions on Computational Imaging · Pub Date: 2025-04-17 · DOI: 10.1109/TCI.2025.3562059
Genyuan Zhang;Zihao Wang;Haijun Yu;Song Ni;Haixia Xie;Qiegen Liu;Fenglin Liu;Shaoyu Wang
{"title":"Score-Based Generative Model With Conditional Null-Space Learning for Limited-Angle Tomographic Reconstruction in Medical Imaging","authors":"Genyuan Zhang;Zihao Wang;Haijun Yu;Song Ni;Haixia Xie;Qiegen Liu;Fenglin Liu;Shaoyu Wang","doi":"10.1109/TCI.2025.3562059","DOIUrl":"https://doi.org/10.1109/TCI.2025.3562059","url":null,"abstract":"Limited-angle computed tomography (LA-CT) reconstruction represents a typically ill-posed inverse problem, frequently resulting in reconstructions with noticeable edge divergence and missing features. Score-based generative models (SGMs) based reconstruction methods have shown strong ability to reconstruct high-fidelity images for LA-CT. Data consistency is crucial for generating reliable and high-quality results in SGMs-based reconstruction methods. However, existing deep reconstruction methods have not fully explored data consistency, resulting in suboptimal performance. Based on this, we proposed a Conditional Score-based Null-space (CSN) generative model for LA-CT reconstruction. First, CSN integrates prior physical information of limited-angle scanning as conditional constraint, which can enable SGMs to obtain more accurate generation. Second, in order to balance the consistency and realness of the reconstruction results, the range-null space decomposition strategy is introduced in the sampling process. This strategy ensures that the estimation of the information occurs only in the null-space. Finally, we employ the sparse least square (LSQR) instead of commonly used consistency terms such as simultaneous iterative reconstruction technique (SIRT), thereby achieving superior reconstruction results. In addition, a mathematical convergence analysis of our CSN method is provided. Experimental evaluations on both numerical simulations and real-world datasets demonstrate that the proposed method offers notable advantages in reconstruction quality.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"556-569"},"PeriodicalIF":4.2,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143918784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
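The range-null space decomposition applied during sampling has a compact closed form: the component A⁺y is fixed by the measurements, and the generative model only fills in the orthogonal complement. A minimal NumPy sketch with a generic dense forward operator; the paper's CT system matrix and LSQR-based consistency step are not reproduced here:

```python
import numpy as np

def range_null_combine(A: np.ndarray, y: np.ndarray, x_gen: np.ndarray) -> np.ndarray:
    """Combine the measurement-consistent range component with a generative
    null-space estimate: x = A+ y + (I - A+ A) x_gen.

    Because A (I - A+ A) = 0, the output reproduces the data exactly
    whenever y lies in the range of A, while the generative estimate x_gen
    contributes only the unobserved (null-space) component.
    """
    A_pinv = np.linalg.pinv(A)
    range_part = A_pinv @ y                    # fixed by the measurements
    null_part = x_gen - A_pinv @ (A @ x_gen)   # (I - A+ A) x_gen
    return range_part + null_part
```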
Diff-Holo: A Residual Diffusion Model With Complex Transformer for Rapid Single-Frame Hologram Reconstruction
IF 4.2 · CAS Tier 2 (Computer Science)
IEEE Transactions on Computational Imaging · Pub Date: 2025-04-16 · DOI: 10.1109/TCI.2025.3561683
Ziqi Bai;Xianming Liu;Cheng Guo;Kui Jiang;Junjun Jiang;Xiangyang Ji
{"title":"Diff-Holo: A Residual Diffusion Model With Complex Transformer for Rapid Single-Frame Hologram Reconstruction","authors":"Ziqi Bai;Xianming Liu;Cheng Guo;Kui Jiang;Junjun Jiang;Xiangyang Ji","doi":"10.1109/TCI.2025.3561683","DOIUrl":"https://doi.org/10.1109/TCI.2025.3561683","url":null,"abstract":"Deep learning approaches have gained significant traction in holographic imaging, with diffusion models—an emerging class of deep generative models—showing particular promise in hologram reconstruction. Unlike conventional neural networks that directly generate outputs, diffusion models gradually add noise to data and train neural networks to remove it, enabling them to learn implicit priors of the underlying data distribution. However, current diffusion-based hologram reconstruction methods often require hundreds or even thousands of iterations to achieve high-fidelity results, leading to processing times of several minutes or more—falling short of the fast imaging demands of holographic systems. To address this, we propose <italic>Diff-Holo</i>, a residual diffusion model integrated with a complex transformer, designed for rapid and high-quality single-frame hologram reconstruction. Specifically, we create a shorter and more efficient Markov chain by controlling the residuals between clean images and those degraded by twin-image artifacts. Additionally, we incorporate complex-valued priors into the network by using a complex window-based transformer as the backbone, enhancing the network's ability to process complex-valued data in the reverse reconstruction process. Experimental results demonstrate that Diff-Holo achieves high-quality single-frame reconstructions in as few as 15 sampling steps, reducing reconstruction time from minutes to under 2.2 seconds.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"689-703"},"PeriodicalIF":4.2,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144171030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
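The residual-chain idea can be sketched as a forward process that drifts from the clean image toward the degraded one rather than toward pure noise, which is what allows a much shorter chain. A hedged PyTorch sketch in the spirit of residual diffusion; the schedule shape, noise scale kappa, and the complex transformer backbone are illustrative, not the paper's exact design:

```python
import torch

def residual_forward(x0, y, t, etas, kappa=1.0):
    """One forward step of a residual chain: states drift from the clean
    image x0 toward the degraded (twin-image-corrupted) input y instead of
    toward pure noise, so far fewer steps are needed to traverse the chain.

    x0, y: (B, C, H, W); t: (B,) integer timesteps; etas: (T,) schedule in [0, 1].
    """
    eta = etas[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    return x0 + eta * (y - x0) + kappa * eta.sqrt() * noise

# A short schedule matching the 15-step sampling budget quoted above;
# the quadratic shape is an arbitrary illustrative choice.
T = 15
etas = torch.linspace(0.0, 1.0, T) ** 2
```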
Multi-Scale Cascaded With Cross-Attention Network-Based Deformation Vector Field Estimation for Motion-Compensated 4D-CBCT Reconstruction
IF 4.2 · CAS Tier 2 (Computer Science)
IEEE Transactions on Computational Imaging · Pub Date: 2025-04-16 · DOI: 10.1109/TCI.2025.3561660
Peng Yuan;Fei Lyu;Zhiqiang Gao;Chunfeng Yang;Dianlin Hu;Jian Zhu;Zhan Wu;Tianling Lyu;Wei Zhao;Jianmin Dong;Yang Chen
{"title":"Multi-Scale Cascaded With Cross-Attention Network-Based Deformation Vector Field Estimation for Motion-Compensated 4D-CBCT Reconstruction","authors":"Peng Yuan;Fei Lyu;Zhiqiang Gao;Chunfeng Yang;Dianlin Hu;Jian Zhu;Zhan Wu;Tianling Lyu;Wei Zhao;Jianmin Dong;Yang Chen","doi":"10.1109/TCI.2025.3561660","DOIUrl":"https://doi.org/10.1109/TCI.2025.3561660","url":null,"abstract":"Four-Dimensional Cone Beam Computed Tomography (4D-CBCT) imaging technology offers enhanced image quality and spatial resolution for intraoperative guidance, facilitating real-time tracking of tumor position changes during radiotherapy. However, this is still a task of great challenges due to insufficient projections at each respiratory phase after phase-sorting, and the image phases reconstructed directly from phase-sorted data are discrete and discontinuous. To generate high-quality 4D-CBCT deformation vector fields (DVFs), this paper leverages the preoperative static prior image to guide intraoperative dynamic sparse-view reconstruction images for reducing anatomical structure differences, ultimately achieving continuous and dynamic 4D-CBCT imaging. In this paper, we propose a Multi-scale Cascaded residual deformable vector field estimation framework based on Cross-attention in Motion-compensated 4D-CBCT reconstruction (MCCM), which combines Multi-Scale Cascaded residual registration network (MSC-Net), Cross-Attention Enhanced feature Fusion (CAEF) module and Structure-Enhanced Motion-Compensated (SEMC) module: 1) the MCCM employs a multi-scale cascaded residual network strategy, merging multi-receptive fields and multi-resolution feature maps for large-scale internal changes. 2) the CAEF is embedded into MSC-Net to facilitate effective communication and learning between features and promote the flow of information. 3) the SEMC is developed to reduce artifacts after intraoperative CBCT sparse-view reconstruction, restore global lung motion changes and local details, and enhance structural information through image fusion. The proposed method has been evaluated using simulated and clinical datasets and has shown promising results by comparative experiment. Our approach exhibits significant improvements across various evaluation metrics.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"717-731"},"PeriodicalIF":4.2,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144171032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
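The basic operation behind motion-compensated reconstruction, warping an image with an estimated DVF, can be written in a few lines. A 2D PyTorch sketch using torch.nn.functional.grid_sample; the paper estimates the DVF itself with MSC-Net/CAEF, which is not reproduced here, and a 3D version would use a (B, 3, D, H, W) field:

```python
import torch
import torch.nn.functional as F

def warp_with_dvf(image: torch.Tensor, dvf: torch.Tensor) -> torch.Tensor:
    """Warp a batch of 2D images with a deformation vector field.

    image: (B, C, H, W); dvf: (B, 2, H, W) displacements in pixels, where
    dvf[:, 0] is the x-displacement and dvf[:, 1] the y-displacement.
    """
    B, _, H, W = image.shape
    # Identity sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
    # Convert pixel displacements to normalized offsets and resample.
    offset = torch.stack(
        (dvf[:, 0] * 2.0 / max(W - 1, 1), dvf[:, 1] * 2.0 / max(H - 1, 1)), dim=-1)
    return F.grid_sample(image, grid + offset, align_corners=True)
```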
PAH2T-Former: Paired-Attention Hybrid Hierarchical Transformer for Synergistically Enhanced FMT Reconstruction Quality and Efficiency
IF 4.2 · CAS Tier 2 (Computer Science)
IEEE Transactions on Computational Imaging · Pub Date: 2025-04-09 · DOI: 10.1109/TCI.2025.3559431
Peng Zhang;Xingyu Liu;Qianqian Xue;Yu Shang;Chen Liu;Ruhao Chen;Honglei Gao;Jiye Liang;Wenjian Wang;Guanglei Zhang
{"title":"PAH2T-Former: Paired-Attention Hybrid Hierarchical Transformer for Synergistically Enhanced FMT Reconstruction Quality and Efficiency","authors":"Peng Zhang;Xingyu Liu;Qianqian Xue;Yu Shang;Chen Liu;Ruhao Chen;Honglei Gao;Jiye Liang;Wenjian Wang;Guanglei Zhang","doi":"10.1109/TCI.2025.3559431","DOIUrl":"https://doi.org/10.1109/TCI.2025.3559431","url":null,"abstract":"Fluorescence molecular tomography (FMT) is a sensitive optical imaging technique that can achieve three-dimensional (3D) tomographic images at the molecular and cellular levels. However, reconstructing the internal 3D distribution of fluorescent targets from surface two-dimensional (2D) fluorescence projection data remains a challenging task. In recent years, deep learning-based FMT reconstruction has received considerable attention, demonstrating superior performance compared to conventional methods, particularly combined with Transformers. Unlike convolutional architectures that emphasize local context, Transformers leverage self-attention mechanisms to excel at capturing long-range dependencies, thereby enhancing FMT reconstruction accuracy. Nevertheless, the quadratic computational complexity of self-attention poses a bottleneck, particularly pertinent in 3D FMT reconstructions. This paper aims to propose a novel Transformer-based FMT reconstruction algorithm that not only delivers high-quality reconstruction accuracy but also maintains excellent performance in efficiency and inference speed. The key design involves introducing a novel Spatial-Channel Paired Attention Module (SC-PAM), which employs a pair of interdependent branches based on spatial and channel attention, thus effectively learn discriminative features in both spatial and channel domains, meanwhile exhibiting linear complexity relative to the input projection size. Furthermore, to facilitate data transmission between the spatial and channel branches, we share the weights of the query and key mapping functions, which provides a complementary paired attention without elevating complexity. Extensive evaluations through numerical simulations and in vivo experiments were performed to validate effectiveness of the proposed model. The results show that our PAH2T-Former method achieves the highest Dice while reducing model parameters and complexity.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"536-545"},"PeriodicalIF":4.2,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143875176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
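A toy version of paired spatial/channel attention with shared query/key projections is sketched below. One deliberate simplification: the spatial branch here is plain quadratic attention for readability, whereas the paper's SC-PAM is formulated to be linear in the input projection size; only the query/key weight sharing follows the abstract, and everything else is a hypothetical simplification:

```python
import torch
import torch.nn as nn

class PairedAttention(nn.Module):
    """Spatial and channel attention branches sharing query/key projections.

    The spatial branch attends over the N token positions; the channel
    branch applies transposed attention over the C feature channels, which
    is linear in the number of tokens.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)   # shared across branches
        self.k = nn.Linear(dim, dim, bias=False)   # shared across branches
        self.v_spatial = nn.Linear(dim, dim, bias=False)
        self.v_channel = nn.Linear(dim, dim, bias=False)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, C)
        q, k = self.q(x), self.k(x)
        n, c = x.shape[1], x.shape[2]
        # Spatial branch: (N x N) attention over token positions.
        attn_s = torch.softmax(q @ k.transpose(-2, -1) / c ** 0.5, dim=-1)
        out_s = attn_s @ self.v_spatial(x)
        # Channel branch: (C x C) attention over feature channels.
        attn_c = torch.softmax(q.transpose(-2, -1) @ k / n ** 0.5, dim=-1)
        out_c = self.v_channel(x) @ attn_c
        return self.proj(torch.cat([out_s, out_c], dim=-1))
```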
HDD-Net: Haar Dual Domain Network for Ring Artifacts Correction
IF 4.2 · CAS Tier 2 (Computer Science)
IEEE Transactions on Computational Imaging · Pub Date: 2025-04-01 · DOI: 10.1109/TCI.2025.3551166
Xuelong Wu;Junsheng Wang;Qingjie Zhao
{"title":"HDD-Net: Haar Dual Domain Network for Ring Artifacts Correction","authors":"Xuelong Wu;Junsheng Wang;Qingjie Zhao","doi":"10.1109/TCI.2025.3551166","DOIUrl":"https://doi.org/10.1109/TCI.2025.3551166","url":null,"abstract":"Ring artifacts are common artifacts in X-ray Computed Tomography (XCT) scans and have a significant impact on subsequent feature/phase extractions due to the small grayscale gradients in XCT volume data of bulk materials. This paper proposes the Haar Dual Domain Network for correcting ring artifacts. By utilizing the Haar wavelet decomposition on images containing ring artifacts in both the image and projection domains, the ring artifacts are preliminarily separated, facilitating their removal by neural networks while preserving microstructure features such as low-contrast phase boundaries. By constructing a feature fusion network, the information from both 2D slices and 3D projection volume data has been fully integrated to eliminate ring artifacts while preserving the edges of every feature. The effectiveness of the Haar wavelet transform and fusion network has been validated by ablation experiments, proving the application of HDD-Net to large volume of XCT data.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"399-409"},"PeriodicalIF":4.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143761377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
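The separation step can be illustrated with a single-level Haar decomposition in the projection domain, where a defective detector channel shows up as a stripe at constant detector position across all angles, so its energy concentrates in one directional detail sub-band. A PyWavelets sketch; HDD-Net's dual-domain fusion network is not reproduced here:

```python
import numpy as np
import pywt

def haar_separate(sinogram: np.ndarray):
    """Single-level 2D Haar decomposition of a sinogram (angle x detector).

    Stripe (ring) energy concentrates mainly in one directional detail
    sub-band, where a network or filter can suppress it, while the
    approximation band keeps low-contrast phase boundaries intact.
    """
    cA, (cH, cV, cD) = pywt.dwt2(sinogram, "haar")
    return cA, cH, cV, cD

def haar_merge(cA, cH, cV, cD) -> np.ndarray:
    # Inverse transform after the detail bands have been cleaned.
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")
```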
PACformer: A Multi-Stage Heterogeneous Convolutional-Vision Transformer for Sparse-View Photoacoustic Tomography Restoration
IF 4.2 · CAS Tier 2 (Computer Science)
IEEE Transactions on Computational Imaging · Pub Date: 2025-03-31 · DOI: 10.1109/TCI.2025.3550716
Li He;Ruitao Chen;Xiangyu Liu;Xu Cao;Shouping Zhu;Yihan Wang
{"title":"PACformer: A Multi-Stage Heterogeneous Convolutional-Vision Transformer for Sparse-View Photoacoustic Tomography Restoration","authors":"Li He;Ruitao Chen;Xiangyu Liu;Xu Cao;Shouping Zhu;Yihan Wang","doi":"10.1109/TCI.2025.3550716","DOIUrl":"https://doi.org/10.1109/TCI.2025.3550716","url":null,"abstract":"Sparse sampling of photoacoustic (PA) signals is a crucial strategy for enhancing the feasibility of photoacoustic tomography (PAT) in clinical settings by reducing system complexity and costs. However, this approach often faces significant artifacts resulting from traditional reconstruction algorithms, underscoring the urgent need for effective solutions. To address the critical challenge of balancing computational efficiency with imaging quality, we introduce PACformer—a novel hybrid model that integrates convolutional neural networks (CNNs) with multi-head self-attentions (MSAs) to improve the reconstruction of sparse-view PAT images. While conventional CNNs excel at local feature extraction, they often struggle to capture long-range dependencies inherent in continuous structures and the diverse artifact patterns present in PAT images. PACformer tackles these limitations through a dual architecture that seamlessly combines MSAs with heterogeneous convolutional layers. Since feature representations differ in size and semantics at various stages of the deep model, PACformer employs specialized blocks for shallow and deep stages. Specifically, it utilizes efficient local convolutions and windowed MSAs for high-resolution feature maps, conditional convolutions (CondConv) integrated with MSAs for advanced feature representation in deeper stages, and Scale-Modulated Convolution combined with CondConv for the bottleneck stage. Experimental results on open-source datasets demonstrate PACformer's superior performance compared to traditional and state-of-the-art networks, validated through ablation studies and attention map visualizations. By effectively modeling both local and global artifacts, PACformer establishes itself as a robust solution for sparse-view PAT reconstruction.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"377-388"},"PeriodicalIF":4.2,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143761353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
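Of the building blocks named above, conditional convolution (CondConv) is the easiest to show compactly: each input computes routing weights that mix several expert kernels into one per-example kernel, executed as a grouped convolution. A minimal sketch; expert count, routing, and initialization are illustrative, not PACformer's configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CondConv2d(nn.Module):
    """Minimal conditionally parameterized convolution (CondConv) sketch."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3, num_experts: int = 4):
        super().__init__()
        self.out_ch, self.k = out_ch, k
        # K expert kernels, mixed per example by the routing weights.
        self.experts = nn.Parameter(
            torch.randn(num_experts, out_ch, in_ch, k, k) * 0.02)
        self.router = nn.Linear(in_ch, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        B, C, H, W = x.shape
        # Routing weights from globally pooled features.
        r = torch.sigmoid(self.router(x.mean(dim=(2, 3))))        # (B, K)
        # Mix experts into one kernel per example.
        kernels = torch.einsum("bk,koihw->boihw", r, self.experts)
        kernels = kernels.reshape(B * self.out_ch, C, self.k, self.k)
        # Run all examples at once as a grouped convolution.
        out = F.conv2d(x.reshape(1, B * C, H, W), kernels,
                       padding=self.k // 2, groups=B)
        return out.reshape(B, self.out_ch, H, W)
```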
Axial Super-Resolution in Optical Coherence Tomography Images via Spectrum-Based Self-Supervised Training
IF 4.2 · CAS Tier 2 (Computer Science)
IEEE Transactions on Computational Imaging · Pub Date: 2025-03-31 · DOI: 10.1109/TCI.2025.3555134
Zhengyang Xu;Yuting Gao;Xi Chen;Kan Lin;Linbo Liu;Yu-Cheng Chen
{"title":"Axial Super-Resolution in Optical Coherence Tomography Images via Spectrum-Based Self-Supervised Training","authors":"Zhengyang Xu;Yuting Gao;Xi Chen;Kan Lin;Linbo Liu;Yu-Cheng Chen","doi":"10.1109/TCI.2025.3555134","DOIUrl":"https://doi.org/10.1109/TCI.2025.3555134","url":null,"abstract":"High axial resolution in Optical Coherence Tomography (OCT) images is essential for accurately diagnosing skin conditions like psoriasis and keratoderma, where clear boundary delineation can reveal early disease markers. Existing deep learning super-resolution methods typically rely on intensity-based training, which only utilizes magnitude data from the OCT spectrum after Fourier transformation, limiting the reconstruction of fine boundary details. This study introduces a spectrum-based, self-supervised deep learning framework that leverages OCT spectral (fringe) data to improve axial resolution beyond system limits. By training the model directly on fringe data in a self-supervised manner, we achieve finer structural detail recovery. Evaluation metrics included Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and axial resolution estimation. Our framework yielded a 50% improvement in axial resolution, achieving 4.28 μm from 7.19 μm, along with PSNR gains of up to 3.37 dB and SSIM increases by 0.157, significantly enhancing boundary continuity and fine detail reconstruction. Our method surpasses intensity-based approaches in enhancing axial resolution and presents potential for iterative application to achieve even greater improvements. Significance: This framework advances OCT imaging, offering a promising, non-invasive tool for dermatological diagnostics.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"497-505"},"PeriodicalIF":4.2,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143845346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
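The self-supervised pairing follows directly from Fourier optics: an A-line is the Fourier transform of the spectral fringe, so truncating the fringe bandwidth broadens the axial point-spread function. A NumPy sketch that builds a (low-resolution input, full-bandwidth target) pair from a single fringe; the windowing and normalization details are illustrative, not the paper's:

```python
import numpy as np

def make_training_pair(fringe: np.ndarray, keep: float = 0.5):
    """Build a (low-res, high-res) A-line pair from one spectral fringe.

    Keeping only a central fraction of the spectral samples broadens the
    axial PSF, so the truncated fringe yields the low-resolution network
    input and the full fringe the target. fringe: (N,) real-valued samples.
    """
    n = fringe.size
    m = int(n * keep)
    window = np.zeros(n)
    window[(n - m) // 2:(n - m) // 2 + m] = np.hanning(m)
    hr = np.abs(np.fft.fft(fringe * np.hanning(n)))[: n // 2]  # full bandwidth
    lr = np.abs(np.fft.fft(fringe * window))[: n // 2]         # truncated
    return lr, hr
```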
Quick Unsupervised Hyperspectral Dimensionality Reduction for Earth Observation: A Comparison
IF 4.2 · CAS Tier 2 (Computer Science)
IEEE Transactions on Computational Imaging · Pub Date: 2025-03-28 · DOI: 10.1109/TCI.2025.3555137
Daniela Lupu;Joseph L. Garrett;Tor Arne Johansen;Milica Orlandic;Ion Necoara
{"title":"Quick Unsupervised Hyperspectral Dimensionality Reduction for Earth Observation: A Comparison","authors":"Daniela Lupu;Joseph L. Garrett;Tor Arne Johansen;Milica Orlandic;Ion Necoara","doi":"10.1109/TCI.2025.3555137","DOIUrl":"https://doi.org/10.1109/TCI.2025.3555137","url":null,"abstract":"Dimensionality reduction can be applied to hyperspectral images so that the most useful data can be extracted and processed more quickly. This is critical in any situation in which data volume exceeds the capacity of the computational resources, particularly in the case of remote sensing platforms (e.g., drones, satellites), but also in the case of multi-year datasets. Moreover, the computational strategies of unsupervised dimensionality reduction often provide the basis for more complicated supervised techniques. In this work, eight unsupervised dimensionality reduction algorithms are tested on hyperspectral data from the HYPSO-1 earth observation satellite. Each particular algorithm is chosen to be representative of a broader collection of methods. Our extensive experiments probe the computational complexity, reconstruction accuracy, signal clarity, sensitivity to artifacts, and effects on target detection and classification of the different algorithms. No algorithm consistently outperformed the others across all tests, but some general trends regarding the characteristics of the algorithms did emerge. With half a million pixels, computational time requirements of the methods varied by 5 orders of magnitude, and the reconstruction error varied by about 3 orders of magnitude. A relationship between mutual information and artifact susceptibility was suggested by the tests. The relative performance of the algorithms differed significantly between the target detection and classification tests. Overall, these experiments both show the power of dimensionality reduction and give guidance regarding how to evaluate a technique prior to incorporating it into a processing pipeline.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"520-535"},"PeriodicalIF":4.2,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143835450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
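As a reference point for the kind of pipeline the paper benchmarks, the PCA family member can be run in a few lines: flatten the cube to (pixels x bands), project, back-project, and measure reconstruction error. A scikit-learn sketch with PCA standing in for one of the eight algorithm families; HYPSO-1 data loading is omitted:

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_cube(cube: np.ndarray, n_components: int = 10):
    """Unsupervised dimensionality reduction baseline on a (H, W, B) cube.

    Returns the reduced cube and the RMSE of the back-projected
    reconstruction, one of the accuracy metrics probed in the comparison.
    """
    H, W, B = cube.shape
    X = cube.reshape(-1, B)                    # pixels as samples
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(X)              # (H*W, n_components)
    X_rec = pca.inverse_transform(scores)      # back-projection
    rmse = float(np.sqrt(np.mean((X - X_rec) ** 2)))
    return scores.reshape(H, W, n_components), rmse
```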
Fast Adaptive Plug-and-Play ADMM Framework for Short-Range 3-D SAR Imaging
IF 4.2 · CAS Tier 2 (Computer Science)
IEEE Transactions on Computational Imaging · Pub Date: 2025-03-26 · DOI: 10.1109/TCI.2025.3573587
The-Hien Pham;Ic-Pyo Hong
{"title":"Fast Adaptive Plug-and-Play ADMM Framework for Short-Range 3-D SAR Imaging","authors":"The-Hien Pham;Ic-Pyo Hong","doi":"10.1109/TCI.2025.3573587","DOIUrl":"https://doi.org/10.1109/TCI.2025.3573587","url":null,"abstract":"The advancement of short-range millimeter-wave (MMW) synthetic aperture radar (SAR) imaging has shown significant advancements in various fields, including security surveillance, non-destructive evaluation, and medical diagnostics. This paper presents a fast adaptive plug-and-play alternating direction method of multipliers (FA-PnP-ADMM) framework designed to improve the efficiency and accuracy of SAR image reconstruction. By addressing key challenges like image degradation caused by fast Fourier transform (FFT) operations and the computational burden of conventional ADMM methods, our framework significantly improves performance. Concretely, alongside a PnP strategy, the proposed FA-PnP-ADMM framework leverages the state-of-the-art single-frequency holographic (SFH) ADMM-based image-solving model and the adaptive parameter adjustment predicated on the relationship between relaxed ADMM and relaxed Douglas-Rachford splitting (DRS). This innovative integration significantly accelerates convergence and reduces computational overhead. Furthermore, the methodology incorporates proficient denoising deep learning (DL) architectures, encompassing convolutional neural network (CNN) and auto-encoder (AE), seamlessly embedded within the iterative process, resulting in a tailored PnP-DL-ADMM. This synergy not only enhances noise suppression and image fidelity but also adapts effectively to diverse scene complexities and noise levels. Unlike previous works that employ these techniques separately, our approach integrates adaptive optimization and DL-based denoisers into a unified framework optimized for short-range 3D SAR imaging. Experimental results demonstrate substantial improvements in both runtime and reconstruction quality, highlighting the practicality and impact of this methodology for real-world applications.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"764-778"},"PeriodicalIF":4.2,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144232128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
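The generic PnP-ADMM template underlying the framework alternates a data-fidelity update, a denoiser step standing in for the prior's proximal map, and a dual update. A NumPy skeleton under simplifying assumptions (fixed penalty rho and a gradient-based x-update); the paper's adaptive parameter rule from the relaxed-ADMM/DRS relationship is not reproduced:

```python
import numpy as np

def pnp_admm(y, A, At, denoiser, rho=1.0, n_iter=30, step=0.1):
    """Plug-and-play ADMM skeleton for min_x 0.5*||A x - y||^2 + prior(x),
    with the prior's proximal step replaced by a learned denoiser.

    A, At: callables for the forward operator and its adjoint;
    denoiser: callable mapping an image estimate to a cleaned estimate.
    The inner step size is illustrative and not tuned.
    """
    x = At(y)
    z = x.copy()
    u = np.zeros_like(x)
    for _ in range(n_iter):
        # x-update: data fidelity, solved here by a few gradient steps.
        for _ in range(5):
            grad = At(A(x) - y) + rho * (x - z + u)
            x = x - step * grad
        z = denoiser(x + u)   # z-update: denoiser as the prior's proximal map
        u = u + x - z         # dual ascent
    return x
```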