{"title":"Single-Pixel Compressive Terahertz 3D Imaging","authors":"Adolphe Ndagijimana;Iñigo Ederra;Miguel Heredia Conde","doi":"10.1109/TCI.2025.3564161","DOIUrl":"https://doi.org/10.1109/TCI.2025.3564161","url":null,"abstract":"Terahertz (THz) imaging contends with the lack of cost-effective, off-the-shelf high-resolution array detectors and the slow acquisition speeds associated with pixel-by-pixel raster scanning. Single-pixel imaging with Compressive Sensing (CS) offers a cost-efficient route to both higher resolution and faster acquisition. Our paper introduces a novel approach for extending 2D single-pixel THz imaging systems to 3D using a single frequency. By leveraging the single-pixel approach, we achieve 3D resolution while avoiding mechanical scanning, and the use of a single frequency eliminates the need for bandwidth, a significant limitation of conventional techniques, where the design of THz sources and detectors with large bandwidth remains challenging and typically complex. The Order Recursive Matching Pursuit (ORMP) algorithm is used as the sparse recovery method to exploit the sparsity/compressibility of the 3D THz signal and enable sampling at a rate far lower than that required by the Nyquist theorem. The 2D sensing matrix is obtained by analyzing the diffracted propagation of THz imaging systems on a 2D surface perpendicular to the optical axis. Moreover, the 3D sensing matrix is based on the diffracted propagation of 2D surfaces at different sampling depth positions. Our system can quickly capture the reflective properties of every point in a 3D space using a single-pixel camera setup that leverages CS, making it a simple and efficient method for creating a fast 3D THz imaging system, particularly suited to high-frequency THz sources that operate efficiently at a single frequency or over a small bandwidth.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"570-585"},"PeriodicalIF":4.2,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10976389","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143918671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Score-Based Generative Model With Conditional Null-Space Learning for Limited-Angle Tomographic Reconstruction in Medical Imaging","authors":"Genyuan Zhang;Zihao Wang;Haijun Yu;Song Ni;Haixia Xie;Qiegen Liu;Fenglin Liu;Shaoyu Wang","doi":"10.1109/TCI.2025.3562059","DOIUrl":"https://doi.org/10.1109/TCI.2025.3562059","url":null,"abstract":"Limited-angle computed tomography (LA-CT) reconstruction represents a typically ill-posed inverse problem, frequently resulting in reconstructions with noticeable edge divergence and missing features. Reconstruction methods based on score-based generative models (SGMs) have shown a strong ability to reconstruct high-fidelity images for LA-CT. Data consistency is crucial for generating reliable and high-quality results in SGM-based reconstruction methods. However, existing deep reconstruction methods have not fully explored data consistency, resulting in suboptimal performance. Motivated by this, we propose a Conditional Score-based Null-space (CSN) generative model for LA-CT reconstruction. First, CSN integrates prior physical information of limited-angle scanning as a conditional constraint, which enables SGMs to achieve more accurate generation. Second, in order to balance the consistency and realism of the reconstruction results, the range-null space decomposition strategy is introduced in the sampling process. This strategy ensures that the estimation of missing information occurs only in the null space. Finally, we employ sparse least squares (LSQR) instead of commonly used consistency terms such as the simultaneous iterative reconstruction technique (SIRT), thereby achieving superior reconstruction results. In addition, a mathematical convergence analysis of our CSN method is provided. Experimental evaluations on both numerical simulations and real-world datasets demonstrate that the proposed method offers notable advantages in reconstruction quality.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"556-569"},"PeriodicalIF":4.2,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143918784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PAH2T-Former: Paired-Attention Hybrid Hierarchical Transformer for Synergistically Enhanced FMT Reconstruction Quality and Efficiency","authors":"Peng Zhang;Xingyu Liu;Qianqian Xue;Yu Shang;Chen Liu;Ruhao Chen;Honglei Gao;Jiye Liang;Wenjian Wang;Guanglei Zhang","doi":"10.1109/TCI.2025.3559431","DOIUrl":"https://doi.org/10.1109/TCI.2025.3559431","url":null,"abstract":"Fluorescence molecular tomography (FMT) is a sensitive optical imaging technique that can achieve three-dimensional (3D) tomographic images at the molecular and cellular levels. However, reconstructing the internal 3D distribution of fluorescent targets from surface two-dimensional (2D) fluorescence projection data remains a challenging task. In recent years, deep learning-based FMT reconstruction has received considerable attention, demonstrating superior performance compared to conventional methods, particularly when combined with Transformers. Unlike convolutional architectures that emphasize local context, Transformers leverage self-attention mechanisms to excel at capturing long-range dependencies, thereby enhancing FMT reconstruction accuracy. Nevertheless, the quadratic computational complexity of self-attention poses a bottleneck, particularly pertinent in 3D FMT reconstructions. This paper proposes a novel Transformer-based FMT reconstruction algorithm that not only delivers high reconstruction accuracy but also maintains excellent efficiency and inference speed. The key design involves introducing a novel Spatial-Channel Paired Attention Module (SC-PAM), which employs a pair of interdependent branches based on spatial and channel attention, thus effectively learning discriminative features in both the spatial and channel domains while exhibiting linear complexity relative to the input projection size. Furthermore, to facilitate data transmission between the spatial and channel branches, we share the weights of the query and key mapping functions, which provides complementary paired attention without elevating complexity. Extensive evaluations through numerical simulations and in vivo experiments were performed to validate the effectiveness of the proposed model. The results show that our PAH2T-Former method achieves the highest Dice score while reducing model parameters and complexity.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"536-545"},"PeriodicalIF":4.2,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143875176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HDD-Net: Haar Dual Domain Network for Ring Artifacts Correction","authors":"Xuelong Wu;Junsheng Wang;Qingjie Zhao","doi":"10.1109/TCI.2025.3551166","DOIUrl":"https://doi.org/10.1109/TCI.2025.3551166","url":null,"abstract":"Ring artifacts are common in X-ray Computed Tomography (XCT) scans and have a significant impact on subsequent feature/phase extraction due to the small grayscale gradients in XCT volume data of bulk materials. This paper proposes the Haar Dual Domain Network (HDD-Net) for correcting ring artifacts. By applying the Haar wavelet decomposition to images containing ring artifacts in both the image and projection domains, the ring artifacts are preliminarily separated, facilitating their removal by neural networks while preserving microstructure features such as low-contrast phase boundaries. A feature fusion network then fully integrates the information from both 2D slices and 3D projection volume data to eliminate ring artifacts while preserving the edges of every feature. The effectiveness of the Haar wavelet transform and the fusion network has been validated by ablation experiments, demonstrating the applicability of HDD-Net to large volumes of XCT data.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"399-409"},"PeriodicalIF":4.2,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143761377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PACformer: A Multi-Stage Heterogeneous Convolutional-Vision Transformer for Sparse-View Photoacoustic Tomography Restoration","authors":"Li He;Ruitao Chen;Xiangyu Liu;Xu Cao;Shouping Zhu;Yihan Wang","doi":"10.1109/TCI.2025.3550716","DOIUrl":"https://doi.org/10.1109/TCI.2025.3550716","url":null,"abstract":"Sparse sampling of photoacoustic (PA) signals is a crucial strategy for enhancing the feasibility of photoacoustic tomography (PAT) in clinical settings by reducing system complexity and costs. However, this approach often suffers significant artifacts under traditional reconstruction algorithms, underscoring the urgent need for effective solutions. To address the critical challenge of balancing computational efficiency with imaging quality, we introduce PACformer—a novel hybrid model that integrates convolutional neural networks (CNNs) with multi-head self-attentions (MSAs) to improve the reconstruction of sparse-view PAT images. While conventional CNNs excel at local feature extraction, they often struggle to capture the long-range dependencies inherent in continuous structures and the diverse artifact patterns present in PAT images. PACformer tackles these limitations through a dual architecture that seamlessly combines MSAs with heterogeneous convolutional layers. Since feature representations differ in size and semantics at various stages of the deep model, PACformer employs specialized blocks for shallow and deep stages. Specifically, it utilizes efficient local convolutions and windowed MSAs for high-resolution feature maps, conditional convolutions (CondConv) integrated with MSAs for advanced feature representation in deeper stages, and Scale-Modulated Convolution combined with CondConv for the bottleneck stage. Experimental results on open-source datasets demonstrate PACformer's superior performance compared to traditional and state-of-the-art networks, validated through ablation studies and attention map visualizations. By effectively modeling both local and global artifacts, PACformer establishes itself as a robust solution for sparse-view PAT reconstruction.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"377-388"},"PeriodicalIF":4.2,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143761353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Axial Super-Resolution in Optical Coherence Tomography Images via Spectrum-Based Self-Supervised Training","authors":"Zhengyang Xu;Yuting Gao;Xi Chen;Kan Lin;Linbo Liu;Yu-Cheng Chen","doi":"10.1109/TCI.2025.3555134","DOIUrl":"https://doi.org/10.1109/TCI.2025.3555134","url":null,"abstract":"High axial resolution in Optical Coherence Tomography (OCT) images is essential for accurately diagnosing skin conditions like psoriasis and keratoderma, where clear boundary delineation can reveal early disease markers. Existing deep learning super-resolution methods typically rely on intensity-based training, which only utilizes magnitude data from the OCT spectrum after Fourier transformation, limiting the reconstruction of fine boundary details. This study introduces a spectrum-based, self-supervised deep learning framework that leverages OCT spectral (fringe) data to improve axial resolution beyond system limits. By training the model directly on fringe data in a self-supervised manner, we achieve finer structural detail recovery. Evaluation metrics included Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and axial resolution estimation. Our framework yielded a 50% improvement in axial resolution, achieving 4.28 μm from 7.19 μm, along with PSNR gains of up to 3.37 dB and SSIM increases of 0.157, significantly enhancing boundary continuity and fine detail reconstruction. Our method surpasses intensity-based approaches in enhancing axial resolution and could be applied iteratively to achieve even greater improvements. Significance: This framework advances OCT imaging, offering a promising, non-invasive tool for dermatological diagnostics.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"497-505"},"PeriodicalIF":4.2,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143845346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quick Unsupervised Hyperspectral Dimensionality Reduction for Earth Observation: A Comparison","authors":"Daniela Lupu;Joseph L. Garrett;Tor Arne Johansen;Milica Orlandic;Ion Necoara","doi":"10.1109/TCI.2025.3555137","DOIUrl":"https://doi.org/10.1109/TCI.2025.3555137","url":null,"abstract":"Dimensionality reduction can be applied to hyperspectral images so that the most useful data can be extracted and processed more quickly. This is critical in any situation in which data volume exceeds the capacity of the computational resources, particularly in the case of remote sensing platforms (e.g., drones, satellites), but also in the case of multi-year datasets. Moreover, the computational strategies of unsupervised dimensionality reduction often provide the basis for more complicated supervised techniques. In this work, eight unsupervised dimensionality reduction algorithms are tested on hyperspectral data from the HYPSO-1 Earth observation satellite. Each algorithm is chosen to be representative of a broader collection of methods. Our extensive experiments probe the computational complexity, reconstruction accuracy, signal clarity, sensitivity to artifacts, and effects on target detection and classification of the different algorithms. No algorithm consistently outperformed the others across all tests, but some general trends regarding the characteristics of the algorithms did emerge. With half a million pixels, the computational time requirements of the methods varied by five orders of magnitude, and the reconstruction error varied by about three orders of magnitude. A relationship between mutual information and artifact susceptibility was suggested by the tests. The relative performance of the algorithms differed significantly between the target detection and classification tests. Overall, these experiments both show the power of dimensionality reduction and give guidance on how to evaluate a technique before incorporating it into a processing pipeline.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"520-535"},"PeriodicalIF":4.2,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143835450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unsupervised Low-Dose CT Reconstruction With One-Way Conditional Normalizing Flows","authors":"Ran An;Ke Chen;Hongwei Li","doi":"10.1109/TCI.2025.3553039","DOIUrl":"https://doi.org/10.1109/TCI.2025.3553039","url":null,"abstract":"Deep-learning techniques have demonstrated significant potential in low-dose computed tomography (LDCT) reconstruction. Nevertheless, supervised methods are limited by the scarcity of labeled data in clinical scenarios, while CNN-based unsupervised denoising methods often result in excessive smoothing of reconstructed images. Although methods based on normalizing flows (NFs) have shown promise in generating detail-rich images and avoiding over-smoothing, they face two key challenges: (1) Existing two-way transformation strategies between noisy images and latent variables, despite leveraging the regularization and generation capabilities of NFs, can lead to detail loss and secondary artifacts; and (2) Training NFs on high-resolution CT images is computationally intensive. While conditional normalizing flows (CNFs) can mitigate computational costs by learning conditional probabilities, current methods rely on labeled data for conditionalization, leaving unsupervised CNF-based LDCT reconstruction an unresolved challenge. To address these issues, we propose a novel unsupervised LDCT iterative reconstruction algorithm based on CNFs. Our approach implements a strict one-way transformation during alternating optimization in the dual spaces, effectively preventing detail loss and secondary artifacts. Additionally, we propose an unsupervised conditionalization strategy, enabling efficient training of CNFs on high-resolution CT images and achieving fast, high-quality unsupervised reconstruction. Experimental results across multiple datasets demonstrate that the proposed method outperforms several state-of-the-art unsupervised methods and even rivals some supervised approaches.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"485-496"},"PeriodicalIF":4.2,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143792929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wavenumber Domain 2-D Separable Data Reformatting Algorithm for High Squint Spotlight SAR","authors":"Qianyu Deng;Yan Jiang;Xinhua Mao","doi":"10.1109/TCI.2025.3551164","DOIUrl":"https://doi.org/10.1109/TCI.2025.3551164","url":null,"abstract":"In high squint spotlight synthetic aperture radar (SAR), if a fixed receive-window is used, the signal is distorted in both the time and frequency domains, leading to a significant amount of redundant data. To improve sampling efficiency, this paper adopts a sliding receive-window for signal sampling. However, using a sliding receive-window introduces 2-D coupling, necessitating 2-D interpolation for decoupling. To achieve efficient and accurate decoupling, this paper proposes a wavenumber domain 2-D separable data reformatting algorithm, which simplifies the 2-D interpolation into two separable 1-D interpolations. The proposed algorithm not only solves the problem of low sampling efficiency in the frequency domain caused by the distortion of the 2-D spectrum in high squint mode, but also improves the processing efficiency of eliminating the 2-D coupling in sliding receive-window mode. The effectiveness of the proposed algorithm is verified by point target simulations and real data processing.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"468-484"},"PeriodicalIF":4.2,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143792919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Structured Illumination Microscopy With Uncertainty-Guided Deep Learning","authors":"Xuyang Chang;Xiaoqin Zhu;Yibo Feng;Zhenyue Chen;Liheng Bian","doi":"10.1109/TCI.2025.3550715","DOIUrl":"https://doi.org/10.1109/TCI.2025.3550715","url":null,"abstract":"Super-resolution microscopy enables the visualization of subcellular structures with unprecedented detail, significantly advancing the life sciences. Among the various techniques available, structured illumination microscopy (SIM) provides an ideal balance of speed, resolution, and phototoxicity. Recent advancements in deep learning have further enhanced SIM capabilities, achieving improved imaging quality with higher signal-to-noise ratios and fewer measurements. However, the opaque nature of these deep learning models complicates the quantification of uncertainty in their outputs, which may lead to visually appealing but scientifically inaccurate results—a particular concern for clinical diagnostics. In this paper, we introduce a two-step strategy that not only quantifies the uncertainty of deep learning models but also enhances super-resolution reconstruction. The first step implements a novel sparse-constrained loss function, incorporating Jeffrey's prior, to accurately predict uncertainty maps. These maps assess the confidence levels of the network's predictions and identify potential inaccuracies. In the second step, these predicted uncertainty maps serve as an attention mechanism, directing the neural network's focus towards areas of high uncertainty to improve the reconstruction of high-frequency details and textures. A series of simulations and experiments confirm that our method accurately quantifies uncertainty and improves high-resolution image reconstruction, increasing the peak signal-to-noise ratio by an average of 1.7 dB and structural similarity by 0.06, compared to traditional methods on mitochondrial and microtubule datasets. Our approach holds promise for advancing the application of deep learning-based super-resolution microscopy in clinical settings.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"389-398"},"PeriodicalIF":4.2,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143761376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}