IEEE Transactions on Computational Imaging: Latest Publications

Single-View Fluorescence Molecular Tomography Based on Hyperspectral NIR-II Imaging
IF 4.8 | CAS Zone 2 | Computer Science
IEEE Transactions on Computational Imaging | Pub Date: 2025-08-25 | DOI: 10.1109/TCI.2025.3602315
Yunfei Li; Qian Liu; Fuhong Cai
Abstract: Biological tissue optics has garnered significant attention in biomedical research for its non-destructive, high-sensitivity nature. However, the scattering and absorption properties of biological tissues fundamentally limit the penetration depth of optical imaging. Fluorescence molecular tomography (FMT) offers a solution balancing imaging depth and resolution, yet tissue scattering and absorption continue to challenge depth-resolved reconstruction accuracy. This study develops a sensitive near-infrared II (NIR-II) hyperspectral imaging system to investigate the relationship between fluorescence penetration depth and tissue absorption/scattering coefficients. By leveraging the strong water absorption peak around 1450 nm, we strategically divide the reconstruction object into layers within the FMT model, significantly improving the conditioning of the ill-posed inverse problem. We then use the hyperspectral data to select wavelengths whose absorption coefficients decrease progressively relative to the 1450 nm peak. This enables layer-by-layer 3D reconstruction of deep biological tissues, overcoming the limitations of conventional FMT. Our method demonstrates single-perspective FMT reconstruction capable of resolving heterogeneous targets at 10 mm depth with a 0.74 Dice coefficient in depth discrimination. This spectral-dimension-enhanced FMT method enables accurate 3D reconstruction from single-view measurements. By exploiting depth-dependent light-tissue interactions at selected NIR-II wavelengths, our approach achieves imaging quality comparable to multi-angle systems while simplifying the experimental setup. Both simulation and phantom experiments demonstrate precise target localization and shape recovery, suggesting promising potential for small-animal imaging applications where system complexity and acquisition speed are critical.
Vol. 11, pp. 1161-1173.
Citations: 0
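
The layer-by-layer strategy can be illustrated with a toy linear inverse problem. The sketch below is an assumption-laden illustration, not the authors' implementation: it posits one sensitivity matrix per wavelength whose penetration cuts off at a hypothetical depth, solves the shallowest layer from the most-absorbed band (near the 1450 nm water peak), subtracts its predicted signal, and proceeds downward.

```python
import numpy as np

rng = np.random.default_rng(0)
n_det, n_vox = 40, 60
layer_bounds = [(0, 20), (20, 40), (40, 60)]    # hypothetical depth layers

# One sensitivity matrix per wavelength, ordered from the 1450 nm absorption
# peak (shallow penetration) to weakly absorbed bands (deep penetration).
A_list = []
for _, hi in layer_bounds:
    A = rng.standard_normal((n_det, n_vox))
    A[:, hi:] = 0.0                   # no sensitivity below this band's depth
    A_list.append(A)

x_true = np.zeros(n_vox)
x_true[[10, 45]] = [1.0, 0.5]         # shallow and deep fluorescent targets
y_list = [A @ x_true for A in A_list]

def solve_layer(A, y, reg=1e-6):
    # Tikhonov-regularized least squares: (A^T A + reg I) x = A^T y
    return np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ y)

# Solve shallow voxels from the most-absorbed band, subtract their predicted
# signal from the next band, and continue layer by layer into the tissue.
x_hat = np.zeros(n_vox)
solved = np.zeros(n_vox, dtype=bool)
for (lo, hi), A, y in zip(layer_bounds, A_list, y_list):
    y_res = y - A[:, solved] @ x_hat[solved]   # remove already-solved layers
    x_hat[lo:hi] = solve_layer(A[:, lo:hi], y_res)
    solved[lo:hi] = True

assert np.allclose(x_hat, x_true, atol=1e-3)
```

Each subproblem is small and overdetermined, which is the sense in which layering eases the ill-posedness of the full single-view inversion.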
Towards Multi-Source Illumination Color Constancy Through Physics-Based Rendering and Spectral Power Distribution Embedding
IF 4.8 | CAS Zone 2 | Computer Science
IEEE Transactions on Computational Imaging | Pub Date: 2025-08-20 | DOI: 10.1109/TCI.2025.3598440
Xinhui Xue; Hai-Miao Hu; Zhuang He; Haowen Zheng
Abstract: Color constancy seeks to keep the perceived color of objects consistent under varying illumination. However, existing methods often rely on restrictive prior assumptions or suffer from limited generalization, posing significant challenges in complex scenes with multiple light sources. In this paper, we propose a neural-network-enhanced, physics-based approach to multi-illuminant color constancy that leverages spectral imaging, which is highly sensitive to illumination variation. First, we analyze the physical image-formation process under mixed lighting and introduce a master–subordinate illumination model, extending conventional correlated-color-temperature re-illumination techniques. Our neural network framework explicitly models the correlation between narrow-band spectral reflectance and the spectral power distribution (SPD) of the illumination, enabling accurate recovery of the scene light's full SPD. Using this model, we fuse RGB images with the estimated illumination spectra to precisely predict illuminant chromaticity, then correct image colors to a standard reference light. Extensive experiments on synthetic multi-color-temperature datasets and real-world spectral datasets demonstrate that our method achieves state-of-the-art accuracy in spectral estimation and color-constancy correction.
Vol. 11, pp. 1349-1360.
Citations: 0
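
The final correction step the abstract describes (mapping colors to a standard reference light once the illuminant is known) can be sketched with a von Kries-style diagonal transform. Everything below, including the per-pixel illuminant map standing in for the paper's SPD-based estimate, is hypothetical.

```python
import numpy as np

def correct_to_reference(img, illum_rgb, ref_rgb=(1.0, 1.0, 1.0)):
    """img: HxWx3 linear RGB; illum_rgb: 3-vector or HxWx3 illuminant estimate."""
    illum = np.broadcast_to(np.asarray(illum_rgb, dtype=float), img.shape)
    gain = np.asarray(ref_rgb, dtype=float) / np.clip(illum, 1e-6, None)
    return np.clip(img * gain, 0.0, 1.0)

# Two hypothetical light sources mixed across the frame (multi-illuminant case):
h, w = 4, 6
mix = np.linspace(0, 1, w)[None, :, None]
illum_map = mix * np.array([1.0, 0.8, 0.6]) + (1 - mix) * np.array([0.6, 0.8, 1.0])
illum_map = np.broadcast_to(illum_map, (h, w, 3))
scene = np.full((h, w, 3), 0.5)           # flat gray scene
observed = scene * illum_map              # colors drift with the local light
restored = correct_to_reference(observed, illum_map)   # ~0.5 everywhere again
```

The hard part the paper addresses is estimating `illum_map` per pixel; once it exists, the correction itself is this diagonal gain.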
Multi-Video Super-Resolution: Spatiotemporal Fusion for Sparse Camera Array
IF 4.8 | CAS Zone 2 | Computer Science
IEEE Transactions on Computational Imaging | Pub Date: 2025-08-18 | DOI: 10.1109/TCI.2025.3599774
Xudong Liu; Tianren Li; Yu Zhang; Yufu Qu; Zhenzhong Wei
Abstract: A sparse camera array captures multiple images of a scene within the same spatial plane, enabling super-resolution reconstruction. However, existing methods often fail to fully exploit time as an additional dimension for information acquisition; even when temporal and spatial observations are collected simultaneously, their individual contributions are often conflated. Analysis of the system's imaging model reveals that a spatiotemporal camera system, integrating a camera array with video sequences, holds greater potential for degradation recovery. Based on these insights, we propose a novel multi-video super-resolution network for spatiotemporal information fusion. Guided by explicit physical dimensional orientation, the network effectively integrates spatial information and propagates it along the temporal dimension. By utilizing diverse and informative spatiotemporal sampling, our method more readily addresses challenges arising from ill-posed mapping matrices during reconstruction. Experimental results on both synthetic and real-world datasets show that the components of our network, with information fully propagated and spatiotemporally fused, work synergistically to enhance super-resolution performance, providing substantial improvements over state-of-the-art methods. We believe this study can inspire innovations in future super-resolution tasks by optimizing information acquisition and utilization.
Vol. 11, pp. 1087-1098.
Citations: 0
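
For orientation, the classical baseline that a camera array enables is shift-and-add fusion: each low-resolution view fills in its own sub-pixel phase of a finer grid. The sketch below, with known integer shifts on the high-resolution grid, is a minimal illustration of that spatial-sampling idea, not the proposed network.

```python
import numpy as np

def shift_and_add(lr_frames, shifts, scale):
    """lr_frames: list of HxW views; shifts: (dy, dx) in HR pixels; scale: int."""
    H, W = lr_frames[0].shape
    acc = np.zeros((H * scale, W * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        ys = np.arange(H) * scale + dy        # where this view's samples land
        xs = np.arange(W) * scale + dx
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    return acc / np.maximum(cnt, 1)           # unfilled HR pixels stay zero

hr = np.kron(np.eye(8), np.ones((4, 4)))      # hypothetical 32x32 scene
views = [hr[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]
fused = shift_and_add(views, [(0, 0), (0, 1), (1, 0), (1, 1)], scale=2)
assert np.allclose(fused, hr)   # with all 4 phases sampled, fusion is exact
```

Video adds a temporal stream of further phases per camera; the paper's point is that those two sources of samples should be fused explicitly rather than conflated.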
Robust Cardiac Cine MRI Reconstruction With Spatiotemporal Diffusion Model
IF 4.8 | CAS Zone 2 | Computer Science
IEEE Transactions on Computational Imaging | Pub Date: 2025-08-13 | DOI: 10.1109/TCI.2025.3598421
Zi Wang; Jiahao Huang; Mingkai Huang; Chengyan Wang; Guang Yang; Xiaobo Qu
Abstract: Accelerated dynamic magnetic resonance imaging (MRI) is in high demand for clinical applications, yet its reconstruction remains challenging due to the inherently high dimensionality and spatiotemporal complexity. While diffusion models have demonstrated robust performance in spatial imaging, their application to spatiotemporal data has been underexplored. To address this gap, we propose a novel spatiotemporal diffusion model (STDM) specifically designed for robust dynamic MRI reconstruction. Our approach decomposes the complex 3D diffusion process into manageable sub-problems by focusing on 2D spatiotemporal images, thereby reducing dimensionality and enhancing computational efficiency. Each 2D image is treated independently, allowing a parallel reverse diffusion process guided by data consistency to ensure measurement alignment. To further improve image quality, we introduce a dual-directional diffusion framework (dSTDM) that simultaneously performs reverse diffusion along two orthogonal directions, effectively capturing the full 3D data distribution. Comprehensive experiments on cardiac cine MRI datasets demonstrate that our approach achieves state-of-the-art performance in highly accelerated reconstruction. It also exhibits preliminary robustness across various undersampling scenarios and unseen datasets, including patient data, non-Cartesian radial sampling, and different anatomies.
Vol. 11, pp. 1258-1270.
Citations: 0
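
The data-consistency guidance mentioned in the abstract is, in the simplest single-coil Cartesian setting, a k-space projection: re-impose the acquired samples on the current estimate at every reverse-diffusion step. A minimal sketch with a hypothetical mask and dimensions:

```python
import numpy as np

def data_consistency(x_est, y_meas, mask):
    """x_est: current 2D image estimate; y_meas: undersampled k-space;
    mask: 1 where k-space was acquired, 0 elsewhere."""
    k_est = np.fft.fft2(x_est)
    k_dc = np.where(mask.astype(bool), y_meas, k_est)  # trust measurements
    return np.fft.ifft2(k_dc)

rng = np.random.default_rng(1)
img = rng.standard_normal((64, 64))
mask = rng.random((64, 64)) < 0.3              # ~30% random undersampling
y = np.fft.fft2(img) * mask
x0 = np.zeros((64, 64))                        # stand-in for a denoiser output
x1 = data_consistency(x0, y, mask)
# Acquired frequencies now match the measurements exactly:
assert np.allclose(np.fft.fft2(x1) * mask, y)
```

In the paper's setting this projection is interleaved with the learned reverse-diffusion updates on each 2D spatiotemporal slice.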
Double-Branched and Multi-Magnetic Directions Feature Fusion Network (DB&MDF2-Net) for the Accurate Reconstruction of Magnetic Particle Imaging
IF 4.8 | CAS Zone 2 | Computer Science
IEEE Transactions on Computational Imaging | Pub Date: 2025-08-13 | DOI: 10.1109/TCI.2025.3598455
Jintao Li; Lizhi Zhang; Shuangchen Li; Huanlong Gao; Shuaishuai He; Yizhe Zhao; Xiaowei He; Yuqing Hou; Hongbo Guo
Abstract: Objective: Magnetic particle imaging (MPI) is a novel non-destructive medical imaging method that visualizes the spatial distribution of superparamagnetic iron oxide nanoparticles. However, because the selection and drive fields are non-uniform, the receive coils are imperfect, and the orthogonal coils detect different components of the magnetization signal (induced electromotive force), processing the voltage signals measured by receive coils in different directions without discrimination degrades reconstruction quality. Methods: This study introduces the Double-Branched and Multi-Magnetic Directions Feature Fusion Network (DB&MDF2-Net) to address these challenges. The dual-branch (DB) strategy processes the X- and Y-directional magnetic field components independently, reducing information confusion. Each branch has a dual-sampling feature (DSF) layer that captures multi-scale spatial information and preserves spatial structure, enhancing the extraction of particle distribution and edge details. Additionally, a multi-head self-attention transformer (MSA-T) layer efficiently integrates features from different modules, allowing the network to learn complex inter-feature relationships. Results: The effectiveness of the DB strategy and the DSF and MSA-T layers was validated through ablation experiments. Simulation and phantom experiments further demonstrate significant improvements in detail capture and noise robustness for DB&MDF2-Net without any hardware modifications, enabling more precise restoration of real particle distribution characteristics. Conclusion: These findings suggest that DB&MDF2-Net can significantly improve the imaging accuracy of MPI. Significance: This research is expected to enhance the practicality of MPI in biomedical applications and contribute to the future development of MPI technology.
Vol. 11, pp. 1074-1086.
Citations: 0
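
As an architectural sketch of the dual-branch idea (separate encoders for the X- and Y-coil signals, fused by multi-head self-attention), the toy PyTorch module below uses hypothetical layer sizes, signal lengths, and output resolution; it is not the published DB&MDF2-Net.

```python
import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    def __init__(self, d_model=64, n_heads=4, out_pixels=32 * 32):
        super().__init__()
        def branch():   # one encoder per magnetic-field direction
            return nn.Sequential(
                nn.Conv1d(1, d_model, kernel_size=9, padding=4), nn.ReLU(),
                nn.Conv1d(d_model, d_model, kernel_size=9, padding=4), nn.ReLU(),
            )
        self.branch_x, self.branch_y = branch(), branch()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, out_pixels)

    def forward(self, sig_x, sig_y):            # each: (batch, signal_len)
        fx = self.branch_x(sig_x.unsqueeze(1)).transpose(1, 2)  # (B, L, d)
        fy = self.branch_y(sig_y.unsqueeze(1)).transpose(1, 2)
        tokens = torch.cat([fx, fy], dim=1)      # fuse the two directions
        fused, _ = self.attn(tokens, tokens, tokens)
        return self.head(fused.mean(dim=1))      # (B, 32*32) particle image

net = DualBranchFusion()
img = net(torch.randn(2, 256), torch.randn(2, 256)).reshape(2, 32, 32)
```

The design choice being illustrated: the two coil directions keep separate weights until the attention layer, so direction-specific signal structure is not averaged away early.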
HDN: Hybrid Deep-Learning and Non-Line-of-Sight Reconstruction Framework for Transcranial Photoacoustic Imaging of Human Brain
IF 4.8 | CAS Zone 2 | Computer Science
IEEE Transactions on Computational Imaging | Pub Date: 2025-08-11 | DOI: 10.1109/TCI.2025.3594073
Pengcheng Wan; Fan Zhang; Yuting Shen; Hulin Zhao; Xiran Cai; Xiaohua Feng; Fei Gao
Abstract: Photoacoustic imaging combines the high contrast of optical imaging with the deep penetration of ultrasonic imaging, showing great potential for cerebrovascular disease detection. However, ultrasonic waves suffer strong attenuation and multiple scattering when passing through the skull, distorting the collected photoacoustic signal. In this paper, inspired by the principles of deep learning and non-line-of-sight imaging, we propose an image reconstruction framework named HDN (Hybrid Deep-learning and Non-line-of-sight), consisting of a signal-extraction part and a difference-utilization part. The signal-extraction part corrects the distorted signal and reconstructs an initial image. The difference-utilization part makes further use of the difference between the distorted and corrected signals, reconstructing the residual image between the initial image and the target image. Test results on a photoacoustic digital-brain simulation dataset show that, compared with the traditional delay-and-sum method and a deep-learning-based method (UNet), HDN achieves superior performance in both signal correction and image reconstruction. In terms of the structural similarity index, HDN reached 0.661, compared to 0.157 for delay-and-sum and 0.305 for the deep-learning-based method.
Vol. 11, pp. 1142-1149.
Citations: 0
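
The delay-and-sum baseline that HDN is compared against is simple enough to sketch: back-project each sensor's trace at the time-of-flight of every pixel and sum. The version below assumes a homogeneous sound speed, which is precisely the assumption the skull violates; the array geometry, sampling rate, and data are hypothetical.

```python
import numpy as np

def delay_and_sum(sinogram, sensor_pos, grid_pts, c=1500.0, fs=40e6):
    """sinogram: (n_sensors, n_samples); sensor_pos, grid_pts: (n, 2) in meters."""
    n_sensors, n_samples = sinogram.shape
    image = np.zeros(len(grid_pts))
    for s in range(n_sensors):
        dist = np.linalg.norm(grid_pts - sensor_pos[s], axis=1)
        idx = np.round(dist / c * fs).astype(int)   # time-of-flight sample
        valid = idx < n_samples
        image[valid] += sinogram[s, idx[valid]]
    return image / n_sensors

# Hypothetical ring array around a 2 cm x 2 cm region:
angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
sensors = 0.02 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
xs = np.linspace(-0.01, 0.01, 50)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
sino = np.random.default_rng(2).standard_normal((64, 1200))  # stand-in data
img = delay_and_sum(sino, sensors, grid).reshape(50, 50)
```

Skull-induced delays make the computed `idx` wrong per sensor, which is why the uncorrected baseline scores 0.157 SSIM in the paper's comparison.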
A Helical Reconstruction Network for Multi-Source Static CT
IF 4.8 | CAS Zone 2 | Computer Science
IEEE Transactions on Computational Imaging | Pub Date: 2025-08-11 | DOI: 10.1109/TCI.2025.3597449
Chunliang Ma; Kaiwen Tan; Yunxiang Li; Shouhua Luo
Abstract: Nanovision static CT is an innovative CT scanning technique in which the X-ray source array and detector array are arranged on two parallel planes with a constant offset. This configuration significantly enhances temporal resolution compared to conventional CT, providing particular advantages for dynamic organ imaging and low-dose imaging applications. However, it also introduces cone-angle and sparse-angle artifacts during helical scanning. To address this, we propose a novel theoretical analysis framework that systematically analyzes the artifact-generation mechanism of the traditional FDK algorithm in this scenario. Through numerical solutions and data superposition, we attribute the causes of the artifacts, for the first time, to two types of data incompleteness: missing cone-angle data and insufficient angular sampling. Building on these insights, we propose an innovative dual-module collaborative reconstruction network. First, we introduce the Helical Bi-directional xFDK algorithm (HbixFDK), which employs a limited-angle weighted compensation strategy to mitigate data incompleteness in the cone-angle region. Next, we develop the attention-based Helical FISTA network (HFISTA-Net), which uses the output of HbixFDK as the initial reconstruction to effectively suppress sparse-sampling artifacts. Extensive experiments on the TCIA dataset and clinical static CT scans demonstrate that the proposed method significantly reduces both cone-angle and sparse-angle artifacts in static CT helical scanning, achieving rapid, high-precision helical reconstruction with superior accuracy and computational efficiency.
Vol. 11, pp. 1174-1189.
Citations: 0
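
HFISTA-Net unrolls a FISTA-type iteration with learned, attention-driven thresholds; plain FISTA with a fixed soft-threshold on a toy sparse-recovery problem shows the iteration being unrolled. The random system matrix below is a stand-in for the CT forward projector, not the paper's operator.

```python
import numpy as np

def soft(v, tau):
    # Soft-thresholding: the proximal operator of the L1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, y, tau=0.05, n_iter=100):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        x_new = soft(z - A.T @ (A @ z - y) / L, tau / L)  # gradient + prox
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)          # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((80, 200))         # stand-in for the system matrix
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = 1.0
x_hat = fista(A, A @ x_true)               # sparse recovery from 80 samples
```

In the unrolled network, the fixed `tau` above is replaced by a threshold predicted per iteration, which is the adaptive element the paper adds.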
Non-Line-of-Sight mmW SAR Imaging With Equivariant Adaptive Threshold Learning
IF 4.8 | CAS Zone 2 | Computer Science
IEEE Transactions on Computational Imaging | Pub Date: 2025-08-11 | DOI: 10.1109/TCI.2025.3597462
Xiang Cai; Shunjun Wei; Mou Wang; Hao Zhang; Kun Chen; Xinyuan Liu; Jun Shi; Guolong Cui
Abstract: High-precision 2-D/3-D Synthetic Aperture Radar (SAR) image reconstruction from the indirect scattered echoes of hidden targets is a core technical challenge in millimeter-wave (mmW) Non-Line-of-Sight (NLOS) environmental perception. Deep learning approaches have demonstrated exceptional performance in SAR imaging, but existing methods are predominantly designed for Line-of-Sight (LOS) scenarios, where clean LOS simulation signals can be acquired for training, a condition often difficult or impossible to meet in NLOS imaging due to complex multipath environments and noise. To tackle this issue within specific NLOS configurations, particularly those involving strong specular reflections from discrete, isolated hidden objects, we propose an Equivariant Imaging (EI) framework tailored for mmW SAR. The EI framework is a fully self-supervised learning approach that leverages the group invariance present in signal distributions, enabling robust image reconstruction from partial NLOS measurements contaminated with noise and multipath artifacts. In our method, the reconstruction function is a deep unfolding network with Total Variation (TV) constraints that maps the NLOS scattered echoes to the target image. Moreover, we introduce an Adaptive Peak Convolution Network (APConv) into the reconstruction process to dynamically adjust thresholds, replacing traditional fixed-threshold methods and enhancing imaging flexibility and quality under these NLOS conditions. Finally, we validate the proposed method on various NLOS echo data collected with an experimental mmW system. Numerical and visual results demonstrate the effectiveness of our approach for NLOS mmW SAR imaging, offering a promising route to advancing NLOS mmW SAR perception, particularly for environments and target configurations aligned with those investigated in our experiments.
Vol. 11, pp. 1190-1205.
Citations: 0
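
The core of an equivariant-imaging training step can be sketched generically: with no ground-truth images, combine measurement consistency with equivariance under a group action (here rotations by multiples of 90 degrees). The operator `A` below is a toy mask standing in for the SAR forward model, and `f` is a toy CNN; this is the generic EI recipe, not the paper's unfolding network.

```python
import torch

def ei_step(f, A, A_pinv, y):
    """f: reconstruction network; A: forward operator; y: (B,1,H,W) measurements."""
    x1 = f(A_pinv(y))                          # reconstruct real measurements
    loss_mc = (A(x1) - y).pow(2).mean()        # measurement consistency
    k = int(torch.randint(1, 4, (1,)))         # group element: rot 90/180/270
    x2 = torch.rot90(x1, k, dims=(-2, -1))     # transformed "virtual" scene
    x3 = f(A_pinv(A(x2)))                      # re-measure and re-reconstruct
    loss_eq = (x3 - x2).pow(2).mean()          # equivariance consistency
    return loss_mc + loss_eq

# Toy instantiation: A = random sampling mask, which is its own pseudo-inverse.
mask = (torch.rand(1, 1, 32, 32) > 0.5).float()
A = lambda x: x * mask
A_pinv = A
f = torch.nn.Sequential(torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
                        torch.nn.Conv2d(16, 1, 3, padding=1))
y = A(torch.rand(4, 1, 32, 32))
loss = ei_step(f, A, A_pinv, y)
loss.backward()
```

The equivariance term is what lets the network see "new" measurement patterns without ever observing a clean image, which is the property that makes EI attractive when clean NLOS training data cannot be acquired.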
Anomaly Distinguishability in an Asteroid Analogue Using Quasi-Monostatic Experimental Radar Measurements
IF 4.8 | CAS Zone 2 | Computer Science
IEEE Transactions on Computational Imaging | Pub Date: 2025-08-11 | DOI: 10.1109/TCI.2025.3597429
Yusuf Oluwatoki Yusuf; Astrid Dufaure; Liisa-Ida Sorsa; Christelle Eyraud; Jean-Michel Geffrin; Alain Hérique; Sampsa Pursiainen
Abstract: This study conducts a quantitative distinguishability analysis using quasi-monostatic experimental radar data to obtain topographic and backpropagated tomographic reconstructions of an analogue of asteroid Itokawa (25143). In particular, we consider a combination of travel-time and wavefield backpropagation tomography, using time-frequency representation (TFR) and principal component analysis (PCA) approaches as filtering techniques. We hypothesise that the travel times of the main peaks in the signal can be projected into a topographic image of the analogue asteroid, and we also present a tomographic reconstruction based on those main peaks. We compare the performance of several filtering approaches across several noise levels and two hypothetical interior structures: homogeneous and detailed. Our results suggest that wavefield information is vital for obtaining adequate reconstruction quality regardless of the noise level, and that different filters affect distinguishability under different noise assumptions. The results also suggest that the main peaks of the measured signal can be used to topographically distinguish the signatures in the measurements, and hence the interior structure of the different analogue asteroids; similarly, a tomographic reconstruction based on the main peaks can distinguish their interior structures.
Vol. 11, pp. 1150-1160.
Open Access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11122330
Citations: 0
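
The travel-time idea, picking the main peaks of each trace and converting their delays to range, is easy to illustrate on a synthetic two-echo trace. The sample rate, wave speed, and echo delays below are hypothetical, chosen only to make the arithmetic visible.

```python
import numpy as np
from scipy.signal import find_peaks

fs, c = 20e9, 3e8                        # sample rate (Hz), wave speed (m/s)
t = np.arange(4000) / fs
trace = np.zeros_like(t)
for delay, amp in [(50e-9, 1.0), (120e-9, 0.4)]:   # surface + interior echo
    i = int(delay * fs)
    trace[i:i + 40] += amp * np.hanning(40)        # toy echo wavelet
trace += 0.02 * np.random.default_rng(4).standard_normal(t.size)

envelope = np.abs(trace)
peaks, _ = find_peaks(envelope, height=0.2, distance=100)
ranges = t[peaks] * c / 2                # two-way travel time -> one-way range
print(ranges)                            # ~7.6 m (surface), ~18.1 m (interior)
```

Repeating this over all measurement directions gives one surface point (and one interior signature) per direction, which is the topographic projection the abstract hypothesises.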
LoFi: Neural Local Fields for Scalable Image Reconstruction
IF 4.8 | CAS Zone 2 | Computer Science
IEEE Transactions on Computational Imaging | Pub Date: 2025-08-04 | DOI: 10.1109/TCI.2025.3594983
AmirEhsan Khorashadizadeh; Tobïas I. Liaudat; Tianlin Liu; Jason D. McEwen; Ivan Dokmanić
Abstract: We introduce LoFi (Local Field), a coordinate-based framework for image reconstruction that combines advantages of convolutional neural networks (CNNs) and neural fields, or implicit neural representations (INRs). Unlike conventional deep neural networks, LoFi reconstructs an image one coordinate at a time, by processing only the adaptive local information from the input that is relevant to the target coordinate. Like INRs, LoFi can efficiently recover images at any continuous coordinate, enabling image reconstruction at multiple resolutions. LoFi generalizes well to out-of-distribution data, with memory usage almost independent of image resolution, while performing as well as or better than standard deep learning models such as CNNs and vision transformers (ViTs). Remarkably, training on 1024 × 1024 images requires less than 200 MB of memory, much less than standard CNNs and ViTs. Our experiments show that locality enables training on extremely small datasets, with ten or fewer samples, without overfitting and without explicit regularization or early stopping.
Vol. 11, pp. 1128-1141.
Open Access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11108275
Citations: 0
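
A conceptual sketch of coordinate-based local reconstruction: for each continuous query coordinate, gather a small local window from the input and let an MLP predict that single pixel. The window size and layer widths below are invented, and LoFi's actual local-field extraction is more elaborate; this only illustrates the coordinate-at-a-time pattern and why memory scales with the number of queried points rather than the image size.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalFieldMLP(nn.Module):
    def __init__(self, window=9, hidden=128):
        super().__init__()
        self.window = window
        self.mlp = nn.Sequential(
            nn.Linear(window * window + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, img, coords):
        """img: (1, 1, H, W); coords: (N, 2) in [-1, 1]; returns (N, 1)."""
        w = self.window
        # Offsets spanning a w x w pixel window in normalized coordinates.
        offs = torch.linspace(-1, 1, w) * (w / img.shape[-1])
        dy, dx = torch.meshgrid(offs, offs, indexing="ij")
        grid = coords[:, None, None, :] + torch.stack([dx, dy], dim=-1)
        patches = F.grid_sample(img.expand(len(coords), -1, -1, -1),
                                grid, align_corners=True)      # (N, 1, w, w)
        feat = torch.cat([patches.flatten(1), coords], dim=1)
        return self.mlp(feat)

net = LocalFieldMLP()
img = torch.rand(1, 1, 64, 64)
coords = torch.rand(500, 2) * 2 - 1        # any continuous query locations
vals = net(img, coords)                    # (500, 1), one pixel per coordinate
```

Because queries are independent, the same trained model can be evaluated on a denser coordinate grid to render the reconstruction at a higher resolution.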