Proceedings of SPIE--the International Society for Optical Engineering: Latest Articles

First In-Vivo demonstration of 1000fps High Speed Coronary Angiography (HSCA) in a swine animal model.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-02 DOI: 10.1117/12.3006858
S V Setlur Nagesh, E Vanderbilt, C Koenigsknecht, D Pionessa, V K Chivukula, C N Ionita, David M Zlotnick, D R Bednarek, S Rudin
High-speed angiography (HSA) at 1000 fps has previously been used to visualize contrast media/blood flow in neurovascular anatomies. In this work we explore its use in cardiovascular anatomies in a swine animal model. A 5 French catheter was guided into the right coronary artery of a swine, followed by injection of iodine contrast through a computer-controlled injector at a controlled rate of 40 ml/min. The injection was captured using high-speed angiography at 1000 fps. Noise in the images was reduced using a custom-built machine-learning model consisting of Long Short-Term Memory networks. From the noise-reduced images, velocity profiles of contrast/blood flow through the artery were calculated using the Horn-Schunck optical flow (OF) method. In the high-speed coronary angiography (HSCA) images, the contrast bolus could be visually tracked with ease as it traversed from the catheter tip through the artery. The technique's high temporal resolution effectively minimized motion artifacts resulting from cardiac activity. The OF results of the contrast injection show velocities in the artery ranging from 20 to 40 cm/s. These results demonstrate the potential of 1000 fps HSCA in cardiovascular imaging. The combined high spatial and temporal resolution offered by this technique allows derivation of velocity profiles throughout the artery's structure, including regions distal and proximal to stenoses; this information can potentially be used to determine the need for stenosis treatment. Further investigations are warranted to expand our understanding of the applications of HSCA in cardiovascular research and clinical practice.
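The velocity-extraction step (Horn-Schunck optical flow between successive frames) can be sketched with plain NumPy. This is a generic textbook implementation, not the authors' pipeline; the Gaussian "bolus" frames, the smoothness weight `alpha`, and the iteration count are illustrative assumptions.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=0.5, n_iter=500):
    """Estimate dense optical flow (u, v) between two frames with the
    classic Horn-Schunck iteration (brightness constancy + smoothness)."""
    I1 = I1.astype(float)
    I2 = I2.astype(float)
    Iy, Ix = np.gradient(I1)          # spatial derivatives (axis 0, axis 1)
    It = I2 - I1                      # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        # 4-neighbour average (wrap-around borders for simplicity)
        u_avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        v_avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_avg - Ix * num / den
        v = v_avg - Iy * num / den
    return u, v

# Synthetic test: a Gaussian "contrast bolus" shifted one pixel to the right.
yy, xx = np.mgrid[0:64, 0:64]
frame1 = np.exp(-((xx - 32)**2 + (yy - 32)**2) / (2 * 5.0**2))
frame2 = np.roll(frame1, 1, axis=1)
u, v = horn_schunck(frame1, frame2)
mean_u = u[20:44, 20:44].mean()       # flow in the blob region points right (u > 0)
```

Multiplying the estimated per-frame displacement by the pixel pitch and the frame rate (1000 fps here) would convert flow into physical velocity.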
Citations: 0
Learned high-resolution cardiac CT imaging from ultra-high-resolution PCD-CT.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-01 DOI: 10.1117/12.3006463
Emily K Koons, Hao Gong, Andrew Missert, Shaojie Chang, Tim Winfree, Zhongxing Zhou, Cynthia H McCollough, Shuai Leng
Coronary computed tomography angiography (cCTA) is a widely used non-invasive diagnostic exam for patients with coronary artery disease (CAD). However, most clinical CT scanners are limited in spatial resolution by their use of energy-integrating detectors (EIDs). Radiological evaluation of CAD is challenging, as coronary arteries are small (3-4 mm in diameter) and calcifications within them are highly attenuating, leading to blooming artifacts; the task is therefore well suited to high spatial resolution. Recently, photon-counting-detector (PCD) CT became commercially available, allowing ultra-high-resolution (UHR) data acquisition. However, PCD-CT systems are costly, restricting widespread accessibility. To address this problem, we propose a super-resolution convolutional neural network (CNN), ILUMENATE (Improved LUMEN visualization through Artificial super-resoluTion imagEs), which creates a high-resolution (HR) image simulating UHR PCD-CT. The network, a modified U-Net architecture, was trained and validated using patches extracted from 8 patients. Training inputs and labels consisted of UHR PCD-CT images reconstructed with a smooth, resolution-degrading kernel (LR input) and a sharp kernel (HR label). The network learned the resolution difference and was tested on 5 unseen LR patients. We evaluated network performance quantitatively and qualitatively through visual inspection, line profiles to assess spatial resolution improvements, ROIs for CT number stability and noise assessment, structural similarity index (SSIM), and percent diameter luminal stenosis. Overall, ILUMENATE improved images quantitatively and qualitatively: it created sharper edges more closely resembling the reconstructed HR reference images, maintained stable CT numbers (less than 4% difference), reduced noise by 28%, maintained structural similarity (average SSIM = 0.70), and reduced percent diameter stenosis with respect to the input images. ILUMENATE demonstrates potential impact for CAD patient management by improving the quality of LR CT images, bringing them closer to UHR PCD-CT images.
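Two of the evaluation metrics above are easy to make concrete. The sketch below uses a simplified single-window SSIM (the standard index is averaged over local windows) and the usual percent-diameter-stenosis formula; the function names and parameters are illustrative, not the paper's code.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window (global) SSIM -- a simplified form of the usual
    locally-windowed index, adequate for a quick patch comparison."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def percent_diameter_stenosis(d_min, d_ref):
    """Percent diameter stenosis from minimal and reference lumen diameters."""
    return 100.0 * (1.0 - d_min / d_ref)
```

For example, a lumen narrowed from a 4 mm reference diameter to 2 mm corresponds to 50% diameter stenosis.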
Citations: 0
Enhancing Colorectal Cancer Tumor Bud Detection Using Deep Learning from Routine H&E-Stained Slides.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-03 DOI: 10.1117/12.3006796
Usama Sajjad, Wei Chen, Mostafa Rezapour, Ziyu Su, Thomas Tavolara, Wendy L Frankel, Metin N Gurcan, M Khalid Khan Niazi
Tumor budding refers to a cluster of one to four tumor cells located at the tumor-invasive front. While tumor budding is a prognostic factor for colorectal cancer, counting and grading tumor buds are time-consuming and not highly reproducible, and inter- and intra-reader disagreement on H&E evaluation can be high. This leads to noisy training (imperfect ground truth) of deep learning algorithms, resulting in high variability and a loss of ability to generalize to unseen datasets. Pan-cytokeratin staining is one potential way to improve agreement, but it is not routinely used to identify tumor buds and can lead to false positives. We therefore aim to develop a weakly supervised deep learning method for tumor bud detection from routine H&E-stained images that does not require strict tissue-level annotations. We also propose Bayesian Multiple Instance Learning (BMIL), which combines multiple annotated regions during training to further enhance generalizability and stability in tumor bud detection. Our dataset consists of 29 colorectal cancer H&E-stained images containing, on average, 115 tumor buds per slide. In six-fold cross-validation, our method demonstrated an average precision and recall of 0.94 and 0.86, respectively. These results provide preliminary evidence of the feasibility of our approach in improving generalizability in tumor bud detection using H&E images while avoiding the need for non-routine immunohistochemical staining methods.
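The precision/recall evaluation implies a one-to-one matching between predicted and annotated bud locations. Below is a minimal greedy-matching sketch, assuming coordinate lists and a pixel tolerance; both are hypothetical, since the paper does not state its matching rule.

```python
import math

def match_detections(pred, truth, tol=10.0):
    """Greedily match predicted to ground-truth bud coordinates within a
    pixel tolerance, then return (precision, recall)."""
    used = set()
    tp = 0
    for px, py in pred:
        best, best_d = None, tol
        for i, (tx, ty) in enumerate(truth):
            if i in used:
                continue
            d = math.hypot(px - tx, py - ty)
            if d <= best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            tp += 1
    fp = len(pred) - tp
    fn = len(truth) - tp
    precision = tp / (tp + fp) if pred else 0.0
    recall = tp / (tp + fn) if truth else 0.0
    return precision, recall
```

A detector that finds one of two true buds plus one spurious location would score precision 0.5 and recall 0.5 under this rule.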
Citations: 0
CAFES: Chest X-ray Analysis using Federated Self-supervised Learning for Pediatric COVID-19 Detection.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-03 DOI: 10.1117/12.3008757
Abhijeet Parida, Syed Muhammad Anwar, Malhar P Patel, Mathias Blom, Tal Tiano Einat, Alex Tonetti, Yuval Baror, Ittai Dayan, Marius George Linguraru
Chest X-rays (CXRs) play a pivotal role in cost-effective clinical assessment of various heart- and lung-related conditions. The urgency of COVID-19 diagnosis prompted their use in identifying conditions like lung opacity, pneumonia, and acute respiratory distress syndrome in pediatric patients. We propose an AI-driven solution for binary COVID-19 versus non-COVID-19 classification in pediatric CXRs: a Federated Self-Supervised Learning (FSSL) framework that enhances Vision Transformer (ViT) performance for COVID-19 detection. ViT's strength in vision-related binary classification tasks, combined with self-supervised pre-training on adult CXR data, forms the basis of the FSSL approach. We implement our strategy on the Rhino Health Federated Computing Platform (FCP), which ensures privacy and scalability for distributed data. The Chest X-ray Analysis using Federated SSL (CAFES) model uses the FSSL-pre-trained ViT weights and demonstrated gains in accurately detecting COVID-19 compared with a fully supervised model: our FSSL-pre-trained ViT achieved an area under the precision-recall curve (AUPR) of 0.952, which is 0.231 points higher than the fully supervised model for COVID-19 diagnosis on pediatric data. Our contributions include leveraging vision transformers for effective COVID-19 diagnosis from pediatric CXRs, employing distributed federated self-supervised pre-training on adult data, and improving pediatric COVID-19 diagnosis performance. This privacy-conscious approach aligns with HIPAA guidelines, paving the way for broader medical imaging applications.
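The AUPR figure above can be reproduced for any score list with the step-wise average-precision formulation; this is a generic sketch, not the CAFES evaluation code.

```python
def average_precision(labels, scores):
    """Area under the precision-recall curve via the step-wise
    average-precision formulation (sum over each positive hit)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n_pos = sum(labels)
    if n_pos == 0:
        return 0.0
    tp = fp = 0
    ap = 0.0
    prev_recall = 0.0
    for i in order:
        if labels[i]:
            tp += 1
            recall = tp / n_pos
            precision = tp / (tp + fp)
            ap += (recall - prev_recall) * precision
            prev_recall = recall
        else:
            fp += 1
    return ap
```

A classifier that ranks every positive above every negative scores an AUPR of 1.0; interleaved rankings score proportionally less.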
Citations: 0
How accurately can quantitative imaging methods be ranked without ground truth: An upper bound on no-gold-standard evaluation.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-03-29 DOI: 10.1117/12.3006888
Yan Liu, Abhinav K Jha
Objective evaluation of quantitative imaging (QI) methods with patient data, while important, is typically hindered by the lack of gold standards. To address this challenge, no-gold-standard evaluation (NGSE) techniques have been proposed and have demonstrated efficacy in accurately ranking QI methods without access to gold standards. The development of NGSE methods raises an important question: how accurately can QI methods be ranked without ground truth? To answer this question, we propose a Cramér-Rao bound (CRB)-based framework that quantifies the upper bound on ranking QI methods without any ground truth. We present the application of this framework in guiding the use of a well-known NGSE technique, the regression-without-truth (RWT) technique. Our results show the utility of this framework in quantifying the performance of this NGSE technique for different patient numbers, and they motivate study of other applications of this upper bound.
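As a toy illustration of the Cramér-Rao machinery the framework builds on (not the paper's ranking bound itself): for N i.i.d. Gaussian samples with known sigma, the Fisher information for the mean is N/sigma^2, so any unbiased estimator has variance at least sigma^2/N, and the sample mean attains that bound.

```python
import numpy as np

# CRB for estimating the mean of N i.i.d. Gaussian samples with known sigma:
# Fisher information I = N / sigma^2  =>  var(unbiased estimator) >= sigma^2 / N.
sigma, n, trials = 2.0, 50, 20000
crb = sigma**2 / n                               # = 0.08 here

rng = np.random.default_rng(0)
samples = rng.normal(0.0, sigma, size=(trials, n))
estimates = samples.mean(axis=1)                 # the efficient estimator
empirical_var = estimates.var()                  # should sit at the bound
```

The Monte Carlo variance of the sample mean lands on the bound to within sampling error, which is the sense in which a CRB quantifies the best achievable performance.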
Citations: 0
Human microscopic vagus nerve anatomy using deep learning on 3D-MUSE images.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-02 DOI: 10.1117/12.3009682
Naomi Joseph, Chaitanya Kolluru, James Seckler, Jun Chen, Justin Kim, Michael Jenkins, Andrew Shofstall, Nikki Pelot, David L Wilson
We are microscopically imaging and analyzing human vagus nerve (VN) anatomy to create the first VN connectome, in support of modeling of neuromodulation therapies. Although micro-CT and MRI roughly identify vagus nerve anatomy, they lack the spatial resolution required to identify small fascicle splitting and merging and perineurium boundaries. We developed 3D serial block-face Microscopy with Ultraviolet Surface Excitation (3D-MUSE), with 0.9-μm in-plane resolution and 3-μm cut thickness. 3D-MUSE is well suited to VN imaging, capturing large myelinated fibers, connective sheaths, fascicle dynamics, and nerve bundle tractography. Each 3-mm 3D-MUSE ROI generates roughly 1,000 grayscale images, necessitating automatic segmentation: over 50 hours were spent manually annotating fascicles, perineurium, and epineurium in every 20th image, yielding 50 annotated images. We trained three types of multi-class deep learning segmentation models. First, 25 annotated images were used to train a 2D U-Net and an Attention U-Net. Second, we trained a Vision Transformer (ViT) using self-supervised learning with 200 unlabeled images, then used the ViT's weights to initialize a U-Net Transformer refined with 25 training images and labels. Third, we created pseudo-3D images by concatenating each annotated image with an image ±k slices away (k=1, 10) and trained a 2D U-Net in the same manner. All models were tested on 25 held-out images and evaluated using the Dice coefficient. While all trained models performed comparably, the 2D U-Net trained on pseudo-3D images demonstrated the highest Dice values (0.936). With sample-based training, very promising segmentation and nerve-fiber tractography results were obtained on thousands of images; additional training on more samples could yield excellent results.
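The Dice evaluation on the held-out images can be sketched generically; the per-class/mean split and the integer label encoding below are assumptions for illustration, not the authors' evaluation script.

```python
import numpy as np

def dice(pred, target, label):
    """Dice coefficient for one class label in two segmentation masks."""
    p = (np.asarray(pred) == label)
    t = (np.asarray(target) == label)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0          # both empty: treat as perfect agreement
    return 2.0 * np.logical_and(p, t).sum() / denom

def mean_dice(pred, target, labels):
    """Average Dice over the foreground classes (e.g. fascicle,
    perineurium, epineurium encoded as 1, 2, 3)."""
    return sum(dice(pred, target, l) for l in labels) / len(labels)
```

On a four-pixel example where one pixel disagrees, each affected class scores 2/3, which is the kind of per-class signal the reported 0.936 averages over.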
Citations: 0
Evaluation of data uncertainty for deep-learning-based CT noise reduction using ensemble patient data and a virtual imaging trial framework.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-01 DOI: 10.1117/12.3008581
Zhongxing Zhou, Scott S Hsieh, Hao Gong, Cynthia H McCollough, Lifeng Yu
Deep learning-based image reconstruction and noise reduction (DLIR) methods have been increasingly deployed in clinical CT. Accurate assessment of their data uncertainty properties is essential to understanding the stability of DLIR in response to noise. In this work, we evaluate the data uncertainty of a DLIR method using real patient data and a virtual imaging trial framework, and compare it with filtered backprojection (FBP) and iterative reconstruction (IR). An ensemble of noise realizations was generated using a realistic projection-domain noise insertion technique. The impact of varying dose levels and denoising strengths was investigated for a ResNet-based deep convolutional neural network (DCNN) model trained on patient images. On the uncertainty maps, the DCNN shows more detailed structures than IR although its bias map has less structural dependency, implying that the DCNN is more sensitive to small changes in the input. Both visual examples and histogram analysis demonstrated that hotspots of uncertainty in the DCNN may be associated with a higher chance of distortion from the truth than IR, but may also correspond to better detection performance for some small structures.
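The ensemble-based bias and uncertainty maps can be computed generically as below; the toy "reconstructions" are illustrative stand-ins for the noise-inserted DLIR outputs, and the function name is an assumption.

```python
import numpy as np

def ensemble_maps(recons, truth):
    """Pixel-wise bias and uncertainty (standard deviation) maps from an
    ensemble of reconstructions of one object under repeated noise
    realizations."""
    recons = np.stack(recons)                    # shape (K, H, W)
    bias = recons.mean(axis=0) - truth           # systematic error map
    uncertainty = recons.std(axis=0)             # data-uncertainty map
    return bias, uncertainty

truth = np.full((4, 4), 100.0)
recons = [truth - 1.0, truth, truth + 1.0]       # toy noise realizations
bias, unc = ensemble_maps(recons, truth)
```

With symmetric offsets the bias map is zero while the uncertainty map is uniformly nonzero, illustrating why the two maps carry complementary information.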
Citations: 0
Automated Web-based Software for CT Quality Control Testing of Low-contrast Detectability using Model Observers.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-03 DOI: 10.1117/12.3008777
Zhongxing Zhou, Jarod Wellinghoff, Mingdong Fan, Scott Hsieh, David Holmes, Cynthia H McCollough, Lifeng Yu
The channelized Hotelling observer (CHO) correlates well with human observer performance in many CT detection/classification tasks, but it has not been widely adopted in routine CT quality control and performance evaluation, mainly because of the lack of an easily available, efficient, and validated software tool. We developed a highly automated solution, CT image quality evaluation and Protocol Optimization (CTPro), a web-based software platform that includes the CHO and other traditional image quality assessment tools such as the modulation transfer function and noise power spectrum. This tool gives both the research and clinical communities easy access to the CHO and enables efficient, accurate image quality evaluation without the need to install additional software. Its application was demonstrated by comparing low-contrast detectability on a clinical photon-counting-detector (PCD) CT with that of a traditional energy-integrating-detector (EID) CT: UHR-T3D had 6.2% higher d' than EID-CT with IR (p = 0.047) and 4.1% lower d' without IR (p = 0.122).
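The detectability index d' reported above is conventionally computed from observer test statistics on signal-present versus signal-absent images. A generic sketch of that final step follows; this is not CTPro code, which derives its statistics from the channelized Hotelling observer.

```python
import numpy as np

def detectability_index(scores_signal, scores_noise):
    """d' from observer test statistics: mean separation of the two score
    distributions over their pooled standard deviation."""
    s = np.asarray(scores_signal, dtype=float)
    n = np.asarray(scores_noise, dtype=float)
    return (s.mean() - n.mean()) / np.sqrt(0.5 * (s.var() + n.var()))
```

A percent difference in d' between two scanners, as quoted in the abstract, is then just `100 * (d_a - d_b) / d_b`.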
Citations: 0
A hyperspectral surgical microscope with super-resolution reconstruction for intraoperative image guidance.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-02 DOI: 10.1117/12.3008789
Ling Ma, Kelden Pruitt, Baowei Fei
Hyperspectral imaging (HSI) is an emerging imaging modality in medical applications, especially for intraoperative image guidance. A surgical microscope improves surgeons' visualization of fine details during surgery, so the combination of HSI and a surgical microscope can provide a powerful tool for surgical guidance. However, acquiring high-resolution hyperspectral images requires long integration times and produces large image files, which can be a burden for intraoperative applications. Super-resolution reconstruction allows acquisition of low-resolution hyperspectral images from which high-resolution HSI is generated. In this work, we developed a hyperspectral surgical microscope and employed our unsupervised super-resolution neural network, which generated high-resolution hyperspectral images with fine textures and the spectral characteristics of tissues. The proposed method can reduce acquisition time and the storage space taken up by hyperspectral images without compromising image quality, which will facilitate the adoption of hyperspectral imaging technology in intraoperative image guidance.
Citations: 0
Fourier Diffusion for Sparse CT Reconstruction.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-01 DOI: 10.1117/12.3008622
Anqi Liu, Grace J Gang, J Webster Stayman
Sparse CT reconstruction continues to be an area of interest in a number of novel imaging systems. Many approaches have been tried, including model-based methods, compressed sensing, and, most recently, deep-learning-based processing. Diffusion models in particular have become extremely popular due to their ability to encode rich information about images and to allow posterior sampling that generates many possible outputs. One drawback of diffusion models is that their recurrent structure tends to be computationally expensive. In this work we apply a new Fourier diffusion approach that permits processing with far fewer time steps than the standard scalar diffusion model. We present an extension of the Fourier diffusion technique and evaluate it in a simulated breast cone-beam CT system with sparse-view acquisition.
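A minimal sketch of the forward process in a Fourier-style diffusion model: instead of one scalar noise schedule, each spatial frequency gets its own signal decay H_t(f) and a matched noise power 1 - H_t(f)^2, keeping the process variance-preserving per frequency. The exponential transfer function and the `decay` constant are illustrative assumptions, not the authors' schedule.

```python
import numpy as np

def fourier_diffusion_forward(x0, t, rng, decay=40.0):
    """Sample x_t from x_0 with a frequency-dependent transfer function:
    high frequencies decay (and are replaced by noise) first."""
    h, w = x0.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    r2 = fx**2 + fy**2                     # squared spatial frequency
    H = np.exp(-decay * t * r2)            # per-frequency signal decay
    noise_gain = np.sqrt(1.0 - H**2)       # per-frequency noise power
    X = np.fft.fft2(x0)
    E = np.fft.fft2(rng.standard_normal(x0.shape))
    return np.fft.ifft2(H * X + noise_gain * E).real

rng = np.random.default_rng(0)
x0 = rng.standard_normal((32, 32))
x_start = fourier_diffusion_forward(x0, t=0.0, rng=rng)   # identity at t = 0
```

Because low frequencies survive to larger t, a reverse sampler can start from a partially diffused state, which is one intuition for why such models can use fewer time steps than scalar diffusion.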
Citations: 0