Latest Articles in IEEE Transactions on Computational Imaging

Accelerated Wirtinger Flow With Score-Based Image Priors for Holographic Phase Retrieval in Poisson-Gaussian Noise Conditions
IF 4.2 · CAS Tier 2 · Computer Science
IEEE Transactions on Computational Imaging Pub Date : 2024-09-26 DOI: 10.1109/TCI.2024.3458418
Zongyu Li;Jason Hu;Xiaojian Xu;Liyue Shen;Jeffrey A. Fessler
Abstract: Phase retrieval (PR) is a crucial problem in many imaging applications. This study focuses on holographic phase retrieval in situations where the measurements are degraded by a combination of Poisson and Gaussian (PG) noise, as commonly occurs in optical imaging systems. We propose a new algorithm called "AWFS" that uses accelerated Wirtinger flow (AWF) with a learned score function as a generative prior. Specifically, we formulate the PR problem as an optimization problem that incorporates both data-fidelity and regularization terms. We calculate the gradient of the log-likelihood function for PR and determine its corresponding Lipschitz constant. Additionally, we introduce a generative prior in our regularization framework by using score matching to capture information about the gradient of image prior distributions. We provide theoretical analysis that establishes a critical-point convergence guarantee for one version of the proposed algorithm. Simulation experiments on three different datasets show the following: 1) by using the PG likelihood model, a practical version of the proposed algorithm improves reconstruction compared to algorithms based solely on Gaussian or Poisson likelihoods; and 2) the proposed score-based image prior leads to better reconstruction quality than methods based on a denoising diffusion probabilistic model (DDPM), plug-and-play alternating direction method of multipliers (PnP-ADMM), and regularization by denoising (RED). (Vol. 10, pp. 1384–1399.)
Citations: 0
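To make the "Wirtinger flow" ingredient concrete, here is a minimal NumPy sketch of a plain Wirtinger-flow gradient iteration for the intensity model y = |Ax|², with spectral initialization. This is not the paper's AWFS: the acceleration, the Poisson-Gaussian likelihood, and the score-based prior are all omitted, and the sensing matrix and sizes are illustrative.

```python
import numpy as np

# Plain Wirtinger flow for y = |Ax|^2 (Gaussian likelihood only); the paper's
# acceleration, PG likelihood, and score prior are omitted. Sizes are toy.
rng = np.random.default_rng(0)
n, m = 16, 128
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = np.abs(A @ x_true) ** 2  # noiseless intensity measurements

def grad(x):
    # Wirtinger gradient of (1/2m) * sum((|a_i^H x|^2 - y_i)^2)
    Ax = A @ x
    return A.conj().T @ ((np.abs(Ax) ** 2 - y) * Ax) / m

# Spectral initialization: leading eigenvector of (1/m) A^H diag(y) A,
# rescaled so that ||x|| matches the measured energy.
Y = A.conj().T @ (y[:, None] * A) / m
w, V = np.linalg.eigh(Y)
x = V[:, -1] * np.sqrt(np.mean(y))

mu = 0.1 / np.linalg.norm(x) ** 2  # conservative step size
res0 = np.linalg.norm(np.abs(A @ x) ** 2 - y)
for _ in range(2000):
    x = x - mu * grad(x)
res = np.linalg.norm(np.abs(A @ x) ** 2 - y)
```

The recovered `x` matches `x_true` only up to a global phase, which is why the sketch tracks the intensity residual rather than comparing vectors directly.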
Spectral Memory-Enhanced Network With Local Non-Local and Low-Rank Priors for Hyperspectral Image Compressive Imaging
IEEE Transactions on Computational Imaging Pub Date : 2024-09-25 DOI: 10.1109/TCI.2024.3468615
Yangke Ying;Jin Wang;Yunhui Shi;Nam Ling;Baocai Yin
Abstract: The hyperspectral image (HSI) compressive imaging field has experienced significant progress in recent years, especially with the emergence of deep unfolding networks (DUNs), which have demonstrated remarkable advancements in reconstruction performance. However, these methods still face several challenges. First, HSI data carries crucial prior knowledge in the feature space, and effectively leveraging these priors is essential for high-quality HSI reconstruction; existing methods either neglect this prior information or incorporate prior-based network modules in a rudimentary manner, limiting the overall reconstruction potential of these models. Second, the transformation between the data and feature domains poses a significant challenge for DUNs, leading to the loss of feature information across stages; existing methods fall short in considering spectral characteristics when utilizing inter-stage information, resulting in inefficient transmission of feature information. In this paper, we introduce a novel deep unfolding network architecture that integrates local, non-local, and low-rank priors with spectral memory enhancement for precise HSI reconstruction. Specifically, we design innovative modules for the local, non-local, and low-rank priors to enrich the network's feature representation capability, fully exploiting the prior information of HSI data in the feature space; these designs also help the overall framework achieve superior reconstruction results with fewer parameters. Moreover, we extensively consider the spectral correlation characteristics of HSI data and devise a spectral memory enhancement network module to mitigate inter-stage feature information loss. Extensive experiments further demonstrate the superiority of our approach. (Vol. 10, pp. 1664–1679.)
Citations: 0
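The deep-unfolding recipe the abstract builds on alternates a gradient step on data fidelity with a prior module. Below is a minimal sketch of that skeleton in NumPy, with soft-thresholding standing in for the learned prior module (in a real DUN each stage's module is a trained network); the operator and sizes are illustrative, not a CASSI model.

```python
import numpy as np

# One unfolding stage per iteration: gradient descent on ||Ax - y||^2, then a
# prior module. Soft-thresholding stands in for the learned network here.
rng = np.random.default_rng(1)
m, n = 24, 32
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, 4, replace=False)  # sparse toy signal
x_true[support] = 1.0
y = A @ x_true

def prior_module(v, t):
    # stand-in for a learned denoiser: soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

eta = 1.0 / np.linalg.norm(A, 2) ** 2  # step size below 1/Lipschitz
x = np.zeros(n)
for _ in range(500):
    x = prior_module(x - eta * A.T @ (A @ x - y), 0.005)
```

With a hand-crafted prior this is just ISTA; the DUN literature replaces `prior_module` (and often `eta`) with learned components while keeping this alternating structure.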
Fast Non-Line-of-Sight Imaging With Hybrid Super-Resolution Network Over 18 m
IEEE Transactions on Computational Imaging Pub Date : 2024-09-19 DOI: 10.1109/TCI.2024.3463964
Leping Xiao;Jianyu Wang;Yi Wang;Ziyu Zhan;Zuoqiang Shi;Lingyun Qiu;Xing Fu
Abstract: Non-line-of-sight (NLOS) imaging aims to visualize hidden objects from light that has undergone multiple reflections. Most existing methods routinely use densely raster-scanned transients with long exposure times, while approaches employing fewer points face a trade-off between computation time and image quality; both hinder the practical implementation of fast NLOS imaging. In this paper, we propose a hybrid super-resolution pipeline for image reconstruction and quality enhancement with only 8×8 scanning points. Besides, we implement a non-coaxial transceiver configuration and present the first auto-calibration method for an out-of-lab NLOS configuration, which costs only 40 s and performs well at a distance of 18.69 m. Results on both experimental data and a public dataset indicate that the proposed method exhibits strong generalization capabilities, yielding faithful reconstructions at a resolution of 256×256 under different noise models. Furthermore, we demonstrate the importance of matching the noise model to the experimental dataset. We believe our approach shows great promise for accelerating NLOS imaging with lower acquisition, calibration, and computation time. (Vol. 10, pp. 1439–1448.)
Citations: 0
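For intuition on the 8×8 → 256×256 gap the paper's hybrid SR network bridges, here is the naive baseline it improves upon: plain bilinear interpolation of a coarse grid. This is purely illustrative and not part of the paper's pipeline.

```python
import numpy as np

# Naive baseline for the 8x8 -> 256x256 upsampling problem: bilinear
# interpolation. The paper's learned pipeline replaces this entirely.
def bilinear_upsample(img, out_h, out_w):
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    dy = (ys - y0)[:, None]
    dx = (xs - x0)[None, :]
    tl = img[np.ix_(y0, x0)]        # top-left corners of each cell
    tr = img[np.ix_(y0, x0 + 1)]
    bl = img[np.ix_(y0 + 1, x0)]
    br = img[np.ix_(y0 + 1, x0 + 1)]
    return ((1 - dy) * (1 - dx) * tl + (1 - dy) * dx * tr
            + dy * (1 - dx) * bl + dy * dx * br)

coarse = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 reconstruction
fine = bilinear_upsample(coarse, 256, 256)
```

Interpolation can only smooth the coarse grid; recovering genuine high-frequency detail from so few scan points is what motivates the learned super-resolution stage.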
3D Helical CT Reconstruction With a Memory Efficient Learned Primal-Dual Architecture
IEEE Transactions on Computational Imaging Pub Date : 2024-09-18 DOI: 10.1109/TCI.2024.3463485
Jevgenija Rudzusika;Buda Bajić;Thomas Koehler;Ozan Öktem
Abstract: Deep-learning-based computed tomography (CT) reconstruction has demonstrated outstanding performance on simulated 2D low-dose CT data. This applies in particular to domain-adapted neural networks, which incorporate a handcrafted physics model for CT imaging. Empirical evidence shows that employing such architectures reduces the demand for training data and improves generalization. However, their training requires large computational resources that quickly become prohibitive in 3D helical CT, the most common acquisition geometry used for medical imaging. This paper modifies a domain-adapted neural network architecture, the Learned Primal-Dual (LPD), so that it can be trained and applied to reconstruction in this setting. The main challenge is to reduce GPU memory requirements during training while keeping the computational time within practical limits. Furthermore, clinical data comes with challenges not accounted for in simulations, such as errors in flux measurement, resolution mismatch and, most importantly, the absence of real ground truth. To the best of our knowledge, this work is the first to apply an unrolled deep learning architecture for reconstruction on full-sized clinical data, such as the Low Dose CT Image and Projection Data set (LDCT). (Vol. 10, pp. 1414–1424.)
Citations: 0
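LPD unrolls a classical primal-dual (Chambolle-Pock-style) iteration and replaces its two proximal updates with learned networks. The sketch below shows that classical skeleton for a least-squares fidelity, with a toy random operator standing in for the CT projector; nothing here is the paper's architecture or its memory-reduction scheme.

```python
import numpy as np

# Primal-dual skeleton that LPD unrolls. In LPD the two update rules are
# learned CNNs; here they are the classical proximal updates for
# (1/2)||Af - g||^2, and A is a toy stand-in for the helical CT projector.
rng = np.random.default_rng(2)
m, n = 20, 30
A = rng.standard_normal((m, n)) / np.sqrt(m)
f_true = rng.standard_normal(n)
g = A @ f_true  # consistent "sinogram"

L = np.linalg.norm(A, 2)
sigma = tau = 0.9 / L            # step sizes with sigma * tau * L^2 < 1
f = np.zeros(n)
h = np.zeros(m)
f_bar = f.copy()
res0 = np.linalg.norm(A @ f - g)
for _ in range(500):
    h = (h + sigma * (A @ f_bar - g)) / (1 + sigma)  # dual step (learned in LPD)
    f_prev = f
    f = f - tau * (A.T @ h)                          # primal step (learned in LPD)
    f_bar = 2 * f - f_prev                           # over-relaxation
res = np.linalg.norm(A @ f - g)
```

Because every unrolled stage must hold activations for backpropagation, memory grows with both iteration count and volume size, which is exactly the bottleneck the paper's modifications target.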
Progressive Self-Supervised Learning for CASSI Computational Spectral Cameras
IEEE Transactions on Computational Imaging Pub Date : 2024-09-18 DOI: 10.1109/TCI.2024.3463478
Xiaoyin Mei;Yuqi Li;Qiang Fu;Wolfgang Heidrich
Abstract: Compressive spectral imaging (CSI) is a technique used to capture high-dimensional hyperspectral images (HSIs) with a few multiplexed measurements, thereby reducing data acquisition costs and complexity. However, existing CSI methods often rely on end-to-end learning from training sets, which may struggle to generalize to unseen scenes and phenomena. In this paper, we present a progressive self-supervised method specifically tailored for coded aperture snapshot spectral imaging (CASSI). The proposed method enables HSI reconstruction solely from the measurements, without requiring any ground-truth spectral data. To achieve this, we integrate positional encoding and spectral cluster-centroid features within a novel progressive training framework. Additionally, we employ an attention mechanism and a multi-scale architecture to enhance the robustness and accuracy of HSI reconstruction. Extensive experiments on both synthetic and real datasets validate the effectiveness of our method: it performs significantly better than state-of-the-art self-supervised CASSI methods while using fewer parameters and less memory, and shows competitive reconstruction quality compared to state-of-the-art supervised methods. (Vol. 10, pp. 1505–1518.)
Citations: 0
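The "positional encoding" ingredient is commonly realized as Fourier features of pixel coordinates; the sketch below shows that common form as an assumption, since the abstract does not spell out the paper's exact formulation.

```python
import numpy as np

# Fourier-feature positional encoding of coordinates, a common realization of
# the "positional encoding" mentioned in the abstract (the paper's exact form
# may differ). Each coordinate is lifted to sines/cosines at octave frequencies.
def positional_encoding(coords, n_freq=4):
    feats = [coords]
    for k in range(n_freq):
        feats.append(np.sin((2 ** k) * np.pi * coords))
        feats.append(np.cos((2 ** k) * np.pi * coords))
    return np.concatenate(feats, axis=-1)

# Encode a 4x4 grid of normalized (x, y) pixel coordinates.
xy = np.stack(np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4)),
              axis=-1).reshape(-1, 2)
enc = positional_encoding(xy)  # (16, 2 * (1 + 2 * n_freq)) = (16, 18)
```

Such encodings let a coordinate-based network represent high-frequency spatial detail that raw (x, y) inputs alone would smooth out.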
DS$^{2}$PN: A Two-Stage Direction-Aware Spectral-Spatial Perceptual Network for Hyperspectral Image Reconstruction
IEEE Transactions on Computational Imaging Pub Date : 2024-09-11 DOI: 10.1109/TCI.2024.3458421
Tiecheng Song;Zheng Zhang;Kaizhao Zhang;Anyong Qin;Feng Yang;Chenqiang Gao
Abstract: Coded aperture snapshot spectral imaging (CASSI) systems are designed to modulate and compress 3D hyperspectral images (HSIs) into 2D measurements, which enables HSI capture in dynamic scenes. Faithfully recovering 3D HSIs from 2D measurements remains one of the challenges. Impressive results have been achieved by deep learning methods based on convolutional neural networks and transformers, but directional information has not been thoroughly explored for reconstructing HSIs and evaluating reconstruction quality. In view of this, we propose a two-stage direction-aware spectral-spatial perceptual network (DS$^{2}$PN) for HSI reconstruction. In the first stage, we design a frequency-based preliminary reconstruction subnetwork to roughly recover the global spectral-spatial information of HSIs via frequency interactions. In the second stage, we design a multi-directional spectral-spatial refinement subnetwork to recover the details of HSIs via directional attention mechanisms. To train the whole network, we build a pixel-level reconstruction loss for each subnetwork, and a feature-level multi-directional spectral-spatial perceptual loss specially tailored to high-dimensional HSIs. Experimental results show that our DS$^{2}$PN outperforms state-of-the-art methods in quantitative and qualitative evaluation on both simulated and real HSI reconstruction tasks. (Vol. 10, pp. 1346–1356.)
Citations: 0
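To illustrate the stage-one idea of recovering global content "via frequency interactions," here is a minimal FFT-based split of an image into low-frequency (global) and high-frequency (detail) parts. This shows the general mechanism only; it is not the paper's subnetwork.

```python
import numpy as np

# Split an image into global low-frequency content and high-frequency detail
# with a centered FFT mask. Illustrates the frequency-domain idea behind the
# stage-one subnetwork; not the paper's actual module.
def freq_split(img, radius=4):
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    # The mask is symmetric about the center, so the masked spectrum stays
    # Hermitian and the inverse FFT is real up to rounding error.
    low = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
    high = img - low
    return low, high

rng = np.random.default_rng(6)
img = rng.standard_normal((32, 32))
low, high = freq_split(img)
```

A coarse stage can focus on `low` while a refinement stage (directional attention, in the paper) restores the `high` component.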
Zero-Shot Image Denoising for High-Resolution Electron Microscopy
IEEE Transactions on Computational Imaging Pub Date : 2024-09-11 DOI: 10.1109/TCI.2024.3458411
Xuanyu Tian;Zhuoya Dong;Xiyue Lin;Yue Gao;Hongjiang Wei;Yanhang Ma;Jingyi Yu;Yuyao Zhang
Abstract: High-resolution electron microscopy (HREM) is a powerful tool for directly visualizing a broad range of materials in real space. However, denoising HREM images is challenging due to ultra-low signal-to-noise ratios (SNR) and scarce data availability. In this work, we propose Noise2SR, a zero-shot self-supervised learning (ZS-SSL) denoising framework for HREM. Within this framework, we propose a super-resolution (SR) based self-supervised training strategy incorporating a Random Sub-sampler module. The Random Sub-sampler is designed to generate approximately infinite noisy pairs from a single noisy image, serving as effective data augmentation for zero-shot denoising. Noise2SR trains the network with paired noisy images of different resolutions via the SR strategy: SR-based training allows the network to use more pixels for supervision, and random sub-sampling compels the network to learn continuous signals, enhancing robustness. Meanwhile, we mitigate the uncertainty caused by random sampling by adopting minimum mean squared error (MMSE) estimation for the denoised results. With this distinctive integration of training strategy and proposed designs, Noise2SR achieves superior denoising performance using a single noisy HREM image. We evaluate Noise2SR on both simulated and real HREM denoising tasks; it outperforms state-of-the-art ZS-SSL methods and achieves denoising performance comparable to supervised methods. The success of Noise2SR suggests its potential for improving the SNR of images in other material imaging domains. (Vol. 10, pp. 1462–1475.)
Citations: 0
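The Random Sub-sampler idea can be sketched as drawing one pixel from each 2×2 block of a single noisy image to form a half-resolution image; two independent draws then give a noisy training pair. The implementation below is a plausible minimal version, not the paper's code.

```python
import numpy as np

# Sketch of the Random Sub-sampler: pick one pixel from each 2x2 block of a
# noisy image to form a half-resolution image. Two independent draws give a
# noisy training pair from a single image. Illustrative only.
def random_subsample(img, rng):
    h, w = img.shape
    i = rng.integers(0, 2, size=(h // 2, w // 2))  # row offset within block
    j = rng.integers(0, 2, size=(h // 2, w // 2))  # col offset within block
    rows = 2 * np.arange(h // 2)[:, None] + i
    cols = 2 * np.arange(w // 2)[None, :] + j
    return img[rows, cols]

rng = np.random.default_rng(7)
noisy = np.arange(64, dtype=float).reshape(8, 8)  # stand-in "noisy" image
sub_a = random_subsample(noisy, rng)
sub_b = random_subsample(noisy, rng)
```

Because `sub_a` and `sub_b` carry independent noise realizations of (nearly) the same underlying signal, one can supervise the other, which is the core trick enabling training from a single image.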
DDFSRM: Denoising Diffusion Fusion Model for Line-Scanning Super-Resolution
IEEE Transactions on Computational Imaging Pub Date : 2024-09-10 DOI: 10.1109/TCI.2024.3458468
Rui Liu;Ying Xiao;Yini Peng;Xin Tian
Abstract: Line-scanning super-resolution (LSSR) provides a new way to improve the spatial resolution of images. To further improve its deep-learning-boosted super-resolution (SR) performance, this paper proposes a new denoising diffusion fusion super-resolution model (DDFSRM). Because the reconstruction optimization problem in LSSR is ill-posed, we first build model-based fusion SR guidance and take the diffusion-model sampling mean as an implicit prior learned from data to constrain the optimization model, which improves the model's accuracy. The solution of the model is then embedded in the iterative diffusion sampling process. Finally, a posterior sampling model based on the denoising diffusion probabilistic model for the LSSR task achieves a good balance between denoising and SR capabilities by combining explicit and implicit priors. Both simulated and real experiments show that DDFSRM outperforms other state-of-the-art SR methods in both qualitative and quantitative evaluation. (Vol. 10, pp. 1357–1367.)
Citations: 0
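The sampling backbone the abstract builds on is DDPM ancestral sampling. Below is one reverse-diffusion step in NumPy with a placeholder noise predictor; the paper's model-based fusion guidance and trained network are not shown, and the schedule values are generic defaults, not the paper's.

```python
import numpy as np

# One ancestral sampling step of a DDPM with a placeholder noise predictor.
# The paper embeds model-based fusion guidance into this loop; that part is
# omitted here. Schedule values are generic, not the paper's.
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def eps_model(x_t, t):
    return np.zeros_like(x_t)  # placeholder for the trained noise predictor

def ddpm_step(x_t, t, rng):
    eps = eps_model(x_t, t)
    # Posterior mean of x_{t-1} given x_t and the predicted noise.
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    if t == 0:
        return mean
    z = rng.standard_normal(x_t.shape)
    return mean + np.sqrt(betas[t]) * z  # add scheduled noise

rng = np.random.default_rng(3)
x = rng.standard_normal((8, 8))  # start from pure noise
for t in reversed(range(T)):
    x = ddpm_step(x, t, rng)
```

In a guided posterior-sampling scheme like the one the abstract describes, a data-consistency update derived from the measurement model would be interleaved with each `ddpm_step`.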
ProxNF: Neural Field Proximal Training for High-Resolution 4D Dynamic Image Reconstruction
IEEE Transactions on Computational Imaging Pub Date : 2024-09-10 DOI: 10.1109/TCI.2024.3458397
Luke Lozenski;Refik Mert Cam;Mark D. Pagel;Mark A. Anastasio;Umberto Villa
Abstract: Accurate spatiotemporal image reconstruction methods are needed for a wide range of biomedical research areas but face challenges due to data incompleteness and computational burden. Data incompleteness arises from the undersampling often required to increase frame rates, while computational burden emerges from the memory footprint of high-resolution images with three spatial dimensions and extended time horizons. Neural fields (NFs), an emerging class of neural networks that act as continuous representations of spatiotemporal objects, have previously been introduced to solve these dynamic imaging problems by reframing image reconstruction as a problem of estimating network parameters; they address both challenges by exploiting underlying redundancies in spatiotemporal objects. This work proposes ProxNF, a novel neural field training approach for spatiotemporal image reconstruction that leverages proximal splitting methods to separate computations involving the imaging operator from updates of the network parameters. Specifically, ProxNF evaluates the (subsampled) gradient of the data-fidelity term in the image domain and uses a fully supervised learning approach to update the neural field parameters. The method is demonstrated in two numerical phantom studies and an in vivo application to tumor perfusion imaging in small animal models using dynamic contrast-enhanced photoacoustic computed tomography (DCE-PACT). (Vol. 10, pp. 1368–1383.)
Citations: 0
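The splitting structure the abstract describes has two alternating pieces: a gradient step on the data-fidelity term taken in the image domain, followed by a supervised fit of the representation to that intermediate image. The sketch below mimics that structure with a linear least-squares "field" standing in for the neural field and a toy operator standing in for the imaging operator; both are assumptions for illustration, not the paper's setup.

```python
import numpy as np

# Generic proximal-splitting skeleton: (1) image-domain gradient step on data
# fidelity, (2) supervised fit of the representation to the intermediate
# image. A linear "field" image = B @ theta stands in for the neural field.
rng = np.random.default_rng(4)
m, n, p = 40, 25, 25
A = rng.standard_normal((m, n)) / np.sqrt(m)   # toy imaging operator
B = rng.standard_normal((n, p))                # toy "field" basis
theta_true = rng.standard_normal(p)
g = A @ B @ theta_true                         # consistent measurements

theta = np.zeros(p)
eta = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(1000):
    img = B @ theta
    v = img - eta * A.T @ (A @ img - g)              # image-domain fidelity step
    theta, *_ = np.linalg.lstsq(B, v, rcond=None)    # "network update": fit field to v
```

The appeal of this separation is that the expensive imaging operator touches only images, never network gradients, so the representation update can be a plain supervised regression.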
OpEnCam: Lensless Optical Encryption Camera
IEEE Transactions on Computational Imaging Pub Date : 2024-09-05 DOI: 10.1109/TCI.2024.3451953
Salman S. Khan;Xiang Yu;Kaushik Mitra;Manmohan Chandraker;Francesco Pittaluga
Abstract: Lensless cameras multiplex the incoming light before it is recorded by the sensor. This ability has led to the development of ultra-thin, high-speed, and single-shot 3D imagers. Recently, there have been various attempts to demonstrate another useful aspect of lensless cameras: their ability to preserve the privacy of a scene by capturing encrypted measurements. However, existing lensless camera designs suffer numerous inherent privacy vulnerabilities. To demonstrate this, we develop the first comprehensive attack model for encryption cameras and propose OpEnCam, a novel lensless optical encryption camera design that overcomes these vulnerabilities. OpEnCam encrypts the incoming light before capturing it using the modulating ability of optical masks. Recovery of the original scene from an OpEnCam measurement is possible only with access to the camera's encryption key, defined by the unique optical elements of each camera. The OpEnCam design introduces two major improvements over existing lensless cameras: (a) the use of two co-axially located optical masks, one attached to the sensor and the other a few millimeters above it, and (b) mask patterns derived heuristically from signal processing ideas. We show, through experiments, that OpEnCam is robust against a range of attack types while maintaining the imaging capabilities of existing lensless cameras. We validate the efficacy of OpEnCam using simulated and real data. Finally, we built and tested a prototype in the lab as a proof of concept. (Vol. 10, pp. 1306–1316.)
Citations: 0
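For intuition on mask-based optical encryption, here is a toy separable linear model in which the "key" is a pair of mask matrices: only someone holding the correct pair can invert the measurement. This is a drastic simplification of the OpEnCam optics, for intuition only.

```python
import numpy as np

# Toy stand-in for mask-based optical encryption: a separable model
# Y = K_L @ X @ K_R.T whose "key" is the mask pair (K_L, K_R).
# A drastic simplification of the OpEnCam optics, for intuition only.
rng = np.random.default_rng(5)
n = 8
K_L = rng.standard_normal((n, n))   # "mask" acting on rows
K_R = rng.standard_normal((n, n))   # "mask" acting on columns
scene = rng.standard_normal((n, n))
Y = K_L @ scene @ K_R.T             # encrypted measurement

# Decryption with the correct key recovers the scene...
recovered = np.linalg.solve(K_L, Y) @ np.linalg.inv(K_R.T)

# ...while a wrong key yields an unrelated image.
K_bad = rng.standard_normal((n, n))
wrong = np.linalg.solve(K_bad, Y) @ np.linalg.inv(K_R.T)
```

Real lensless optics impose far more structure than dense random matrices (and physical masks are non-negative), which is why the paper's attack model and heuristic mask design matter: a structured key can be much easier to guess than this toy suggests.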