{"title":"DAMMER: Direct Adaptive Multi-Resolution MEsh Reconstruction From X-Ray Measurements","authors":"Jannes Merckx;Arnold J. den Dekker;Jan Sijbers;Jan De Beenhouwer","doi":"10.1109/TCI.2025.3587408","DOIUrl":"https://doi.org/10.1109/TCI.2025.3587408","url":null,"abstract":"X-ray computed tomography (XCT) reconstructs a scanned object using measured projection data, with the object typically represented on a voxel grid during the reconstruction process. However, since material interfaces typically do not align with voxel boundaries, a voxel representation inherently suffers from partial volume effects. This paper presents DAMMER: a method that reconstructs a multi-resolution triangle mesh to represent the attenuation values of a piecewise homogeneous object, often encountered in industrial CT, based on X-ray projection data of this object. DAMMER progressively reconstructs this mesh to match the object. For this, different homogeneous segments are created based on an agglomerative hierarchical clustering procedure, which targets a compact object description by optimizing a weighted sum of projection difference and number of edges between different segments. These segments are progressively optimized to match the homogeneous materials in the object. Simulation and real data experiments show that DAMMER generates significantly more accurate reconstructions compared to pixel grid reconstruction methods, outperforming conventional voxel-based methods in capturing the true geometry of complex material boundaries.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"926-941"},"PeriodicalIF":4.2,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144671281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pattern Research on Sparse Camera Array for Super-Resolution Imaging","authors":"Tianren Li;Yu Zhang;Tengda Huang;Naishu Jia;Fulin Liu;Yufu Qu;Zhenzhong Wei","doi":"10.1109/TCI.2025.3588025","DOIUrl":"https://doi.org/10.1109/TCI.2025.3588025","url":null,"abstract":"This study addresses the quantitative pattern design of sparse camera arrays in multi-frame super-resolution (SR) imaging systems. We propose the Weighted Ratio of Repeated Spectra (WRRS) metric to predict the super-resolution performance of sparse camera arrays. The WRRS consists of the weight determined by the energy distribution of the imaging scene, and the ratio of repeated spectra intensity determined by array patterns. Our method achieves the optimization of sparse camera array pattern under certain scene without building an actual imaging system. The real image experiment shows that the WRRS is correlated with the median of the SR image quality distribution. Subsequently, a six-camera sparse camera array is designed for aircraft scene with varying object distance. Compared with current qualitatively and quantitatively designed patterns, the proposed sparse camera array pattern that specifically designed for aircraft achieves better average PSNR in the aircraft scene.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"992-1004"},"PeriodicalIF":4.2,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144695615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Vignetting Correction Through Color-Intensity Map Entropy Optimization","authors":"Zhuang He;Hai-Miao Hu;Likun Gao;Haoxin Hu;Xinhui Xue;Zhenglin Tang;Difeng Zhu;Haowen Zheng;Chongze Wang","doi":"10.1109/TCI.2025.3583465","DOIUrl":"https://doi.org/10.1109/TCI.2025.3583465","url":null,"abstract":"Vignetting correction is an essential process of image signal processing. It is an important part for obtaining high-quality images, but the research in this field has not been fully emphasized. The mainstream methods are based on calibration which processes are complex. And many methods get low accuracy and poor robustness in practical. In this paper, we analyzed the optical principle of vignetting and its influence on the image. Then, we proposed an algorithm based on color-intensity map entropy optimization to correct image vignetting. Moreover, because of the lack of dataset of vignetting, we proposed a method for constructing vignetting image dataset through capturing the real scenes. Compared with the dataset generated through simulation, our dataset is more authentic and reliable. Many experiments have been carried out on this dataset, and the results proved that the proposed algorithm achieved the best performance.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"911-925"},"PeriodicalIF":4.2,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144623935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"G2L-Stereo: Global to Local Two-Stage Real-Time Stereo Matching Network","authors":"Jie Tang;Gaofeng Peng;Jialu Liu;Bo Yu","doi":"10.1109/TCI.2025.3581105","DOIUrl":"https://doi.org/10.1109/TCI.2025.3581105","url":null,"abstract":"Developing fast and accurate stereo matching algorithms is crucial for real-world embedded vision applications. Depth information plays a significant role in scene understanding, and depth calculated through stereo matching is generally considered to be more precise and reliable than that obtained from monocular depth estimation. However, speed-oriented stereo matching methods often suffer from poor feature representation due to sparse sampling and detail loss caused by unreasonable disparity allocation during upsampling. To address these issues, we propose G2L-Stereo, a two-stage real-time stereo matching network that combines global disparity range prediction and local disparity range prediction. In the global disparity range prediction stage, we introduce feature-guided connections for cost aggregation, enhancing the expressive power of sparse features by aligning the feature space across different scales of cost volumes. We also incorporate confidence estimation into the upsampling algorithm to reduce the propagation of inaccurate disparities during upsampling, yielding more precise disparity maps. In the local disparity range prediction stage, we develop a disparity refinement module guided by neighborhood similarity. This module aggregates similar neighboring costs to estimate disparity residuals and refine disparities, restoring lost details in the low-resolution disparity map and further enhancing disparity accuracy. Extensive experiments on the SceneFlow and KITTI datasets validate the effectiveness of our model, showing that G2L-Stereo achieves fast inference while maintaining accuracy comparable to state-of-the-art methods.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"852-863"},"PeriodicalIF":4.2,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144524418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Energy-Coded Spectral CT Imaging Method Based on Projection Mix Separation","authors":"Xiaojie Zhao;Yihong Li;Yan Han;Ping Chen;Jiaotong Wei","doi":"10.1109/TCI.2025.3578762","DOIUrl":"https://doi.org/10.1109/TCI.2025.3578762","url":null,"abstract":"Spectral CT can be used to perform material decomposition from polychromatic attenuation data, generate virtual monochromatic or virtual narrow-energy-width images in which beam hardening artifacts are suppressed, and provide detailed energy attenuation coefficients for material characterization. We propose an energy-coded spectral CT imaging method that is based on projection mix separation, which enables simultaneous energy decoding and image reconstruction. An X-ray energy-coded forward model is then constructed. Leveraging the Poisson statistical properties of the measurement data, we formulate a constrained optimization problem for both the energy-coded coefficient matrix and the material decomposition coefficient matrix, which is solved using a block coordinate descent algorithm. Simulations and experimental results demonstrate that the decoded energy spectrum distribution and virtual narrow-energy-width CT images are accurate and effective. The proposed method suppresses beam hardening artifacts and enhances the material identification capabilities of traditional CT.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"839-851"},"PeriodicalIF":4.2,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Iterative Collaboration Network Guided by Reconstruction Prior for Medical Image Super-Resolution","authors":"Xiaoyan Kui;Zexin Ji;Beiji Zou;Yang Li;Yulan Dai;Liming Chen;Pierre Vera;Su Ruan","doi":"10.1109/TCI.2025.3577340","DOIUrl":"https://doi.org/10.1109/TCI.2025.3577340","url":null,"abstract":"High-resolution medical images can provide more detailed information for better diagnosis. Conventional medical image super-resolution relies on a single task which first performs the extraction of the features and then upscaling based on the features. The features extracted may not be complete for super-resolution. Recent multi-task learning, including reconstruction and super-resolution, is a good solution to obtain additional relevant information. The interaction between the two tasks is often insufficient, which still leads to incomplete and less relevant deep features. To address above limitations, we propose an iterative collaboration network (ICONet) to improve communications between tasks by progressively incorporating reconstruction prior to the super-resolution learning procedure in an iterative collaboration way. It consists of a reconstruction branch, a super-resolution branch, and a SR-Rec fusion module. The reconstruction branch generates the artifact-free image as prior, which is followed by a super-resolution branch for prior knowledge-guided super-resolution. Unlike the widely-used convolutional neural networks for extracting local features and Transformers with quadratic computational complexity for modeling long-range dependencies, we develop a new residual spatial-channel feature learning (RSCFL) module of two branches to efficiently establish feature relationships in spatial and channel dimensions. Moreover, the designed SR-Rec fusion module fuses the reconstruction prior and super-resolution features with each other in an adaptive manner. Our ICONet is built with multi-stage models to iteratively upscale the low-resolution images using steps of <inline-formula> <tex-math>${2 times }$</tex-math></inline-formula> and simultaneously interact between two branches in multi-stage supervisions. Quantitative and qualitative experimental results on the benchmarking dataset show that our ICONet outperforms most state-of-the-art approaches.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"827-838"},"PeriodicalIF":4.2,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144336075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"$L^{2}$FMamba: Lightweight Light Field Image Super-Resolution With State Space Model","authors":"Zeqiang Wei;Kai Jin;Zeyi Hou;Kuan Song;Xiuzhuang Zhou","doi":"10.1109/TCI.2025.3577338","DOIUrl":"https://doi.org/10.1109/TCI.2025.3577338","url":null,"abstract":"Transformers bring significantly improved performance to the light field image super-resolution task due to their long-range dependency modeling capability. However, the inherently high computational complexity of their core self-attention mechanism has increasingly hindered their advancement in this task. To address this issue, we first introduce the LF-VSSM block, a novel module inspired by progressive feature extraction, to efficiently capture critical long-range spatial-angular dependencies in light field images. LF-VSSM successively extracts spatial features within sub-aperture images, spatial-angular features between sub-aperture images, and spatial-angular features between light field image pixels. On this basis, we propose a lightweight network, <inline-formula><tex-math>$L^{2}$</tex-math></inline-formula>FMamba (Lightweight Light Field Mamba), which integrates the LF-VSSM block to leverage light field features for super-resolution tasks while overcoming the computational challenges of Transformer-based approaches. Extensive experiments on multiple light field datasets demonstrate that our method reduces the number of parameters and complexity while achieving superior super-resolution performance with faster inference speed.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"816-826"},"PeriodicalIF":4.2,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144323022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Laser Ultrasonic Imaging Via the Time Domain Linear Sampling Method","authors":"Jian Song;Fatemeh Pourahmadian;Todd W. Murray;Venkatalakshmi V. Narumanchi","doi":"10.1109/TCI.2025.3577405","DOIUrl":"https://doi.org/10.1109/TCI.2025.3577405","url":null,"abstract":"This study investigates the imaging ability of the time-domain linear sampling method (TLSM) when applied to laser ultrasonic (LU) tomography of subsurface defects from limited-aperture measurements. In this vein, the TLSM indicator and its spectral counterpart known as the multifrequency LSM are formulated within the context of LU testing. The affiliated imaging functionals are then computed using synthetic and experimental data germane to LU inspection of aluminum alloy specimens with manufactured defects. Hyperparameters of inversion are computationally analyzed. We demonstrate using synthetic data that the TLSM indicator has the unique ability to recover weak (or hard-to-reach) scatterers and has the potential to generate higher quality images compared to LSM. Provided high-SNR measurements, this advantage may be preserved in reconstructions from LU test data.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"803-815"},"PeriodicalIF":4.2,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144323023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust Correspondence Imaging Against Random Disturbances With Single-Pixel Detection","authors":"Zhihan Xu;Yin Xiao;Wen Chen","doi":"10.1109/TCI.2025.3577334","DOIUrl":"https://doi.org/10.1109/TCI.2025.3577334","url":null,"abstract":"Random disturbance has become a great challenge for correspondence imaging (CI) due to dynamic and nonlinear scaling factors. In this paper, we propose a robust CI against random disturbances for high-quality object reconstruction. To remove the effect of dynamic scaling factors induced by random disturbance, a wavelet and total variation (WATV) algorithm is developed to estimate a series of varying thresholds. Then, light intensities collected by a single-pixel detector are processed by using the series of estimated varying thresholds. To realize high-quality object reconstruction, the binarized light intensities and a series of random patterns are fed into a plug-and-play priors (PnP) algorithm with an iteration framework and a general denoiser, called as CI-PnP. Theoretical descriptions are given in detail to reveal the formation mechanism in CI under random disturbance. Optical measurements are conducted to verify robustness of the proposed CI against random disturbances. It is demonstrated that the proposed method can remove the effect of dynamic scaling factors induced by random disturbance, and can realize high-quality object reconstruction. The proposed method provides a promising solution to achieving ultra-high robustness against random disturbances in CI, and is promising in various applications.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"901-910"},"PeriodicalIF":4.2,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144581738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Physics-Inspired Deep Learning Framework With Polar Coordinate Attention for Ptychographic Imaging","authors":"Han Yue;Jun Cheng;Yu-Xuan Ren;Chien-Chun Chen;Grant A. van Riessen;Philip Heng Wai Leong;Steve Feng Shu","doi":"10.1109/TCI.2025.3572250","DOIUrl":"https://doi.org/10.1109/TCI.2025.3572250","url":null,"abstract":"Ptychographic imaging confronts inherent challenges in applying deep learning for phase retrieval from diffraction patterns. Conventional neural architectures, both convolutional neural networks and Transformer-based methods, are optimized for natural images with Euclidean spatial neighborhood-based inductive biases that exhibit geometric mismatch with the concentric coherent patterns characteristic of diffraction data in reciprocal space. In this paper, we present PPN, a physics-inspired deep learning network with Polar Coordinate Attention (PoCA) for ptychographic imaging, that aligns neural inductive biases with diffraction physics through a dual-branch architecture separating local feature extraction from non-local coherence modeling. It consists of a PoCA mechanism that replaces Euclidean spatial priors with physically consistent radial-angular correlations. PPN outperforms existing end-to-end models, with spectral and spatial analysis confirming its greater preservation of high-frequency details. Notably, PPN maintains robust performance compared to iterative methods even at low overlap ratios — well-suited for high-throughput imaging in real-world acquisition scenarios for samples with consistent structural characteristics.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"888-900"},"PeriodicalIF":4.2,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11027575","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144557938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}