Proceedings of SPIE--the International Society for Optical Engineering: Latest Articles

Quantitative accuracy of lung function measurement using parametric response mapping: A virtual imaging study.
Proceedings of SPIE--the International Society for Optical Engineering. Pub Date: 2024-02-01. Epub Date: 2024-04-03. DOI: 10.1117/12.3006833
Amar Kavuri, Fong Chi Ho, Mobina Ghojogh-Nejad, Saman Sotoudeh-Paima, Ehsan Samei, W Paul Segars, Ehsan Abadi
{"title":"Quantitative accuracy of lung function measurement using parametric response mapping: A virtual imaging study.","authors":"Amar Kavuri, Fong Chi Ho, Mobina Ghojogh-Nejad, Saman Sotoudeh-Paima, Ehsan Samei, W Paul Segars, Ehsan Abadi","doi":"10.1117/12.3006833","DOIUrl":"10.1117/12.3006833","url":null,"abstract":"<p><p>Parametric response mapping (PRM) is a voxel-based quantitative CT imaging biomarker that measures the severity of chronic obstructive pulmonary disease (COPD) by analyzing both inspiratory and expiratory CT scans. Although PRM-derived measurements have been shown to predict disease severity and phenotyping, their quantitative accuracy is impacted by the variability of scanner settings and patient conditions. The aim of this study was to evaluate the variability of PRM-based measurements due to the changes in the scanner types and configurations. We developed 10 human chest models with emphysema and air-trapping at end-inspiration and end-expiration states. These models were virtually imaged using a scanner-specific CT simulator (DukeSim) to create CT images at different acquisition settings for energy-integrating and photon-counting CT systems. The CT images were used to estimate PRM maps. The quantified measurements were compared with ground truth values to evaluate the deviations in the measurements. Results showed that PRM measurements varied with scanner type and configurations. The emphysema volume was overestimated by 3 ± 9.5 % (mean ± standard deviation) of the lung volume, and the functional small airway disease (fSAD) volume was underestimated by 7.5±19 % of the lung volume. PRM measurements were more accurate and precise when the acquired settings were photon-counting CT, higher dose, smoother kernel, and larger pixel size. This study demonstrates the development and utility of virtual imaging tools for systematic assessment of a quantitative biomarker accuracy.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12927 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11100024/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141066491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improving Stenosis Assessment in Energy Integrating Detector CT via Learned Monoenergetic Imaging Capability.
Proceedings of SPIE--the International Society for Optical Engineering. Pub Date: 2024-02-01. Epub Date: 2024-04-01. DOI: 10.1117/12.3006468
Shaojie Chang, Emily K Koons, Hao Gong, Jamison E Thorne, Cynthia H McCollough, Shuai Leng
{"title":"Improving Stenosis Assessment in Energy Integrating Detector CT via Learned Monoenergetic Imaging Capability.","authors":"Shaojie Chang, Emily K Koons, Hao Gong, Jamison E Thorne, Cynthia H McCollough, Shuai Leng","doi":"10.1117/12.3006468","DOIUrl":"https://doi.org/10.1117/12.3006468","url":null,"abstract":"<p><p>Coronary CT angiography (cCTA) is a fast non-invasive imaging exam for coronary artery disease (CAD) but struggles with dense calcifications and stents due to blooming artifacts, potentially causing stenosis overestimation. Virtual monoenergetic images (VMIs) at higher keV (e.g., 100 keV) from photon counting detector (PCD) CT have shown promise in reducing blooming artifacts and improving lumen visibility through its simultaneous high-resolution and multi-energy imaging capability. However, most cCTA exams are performed with single-energy CT (SECT) using conventional energy-integrating detectors (EID). Generating VMIs through EID-CT requires advanced multi-energy CT (MECT) scanners and potentially sacrifices temporal resolution. Given these limitations, MECT cCTA exams are not commonly performed on EID-CT and VMIs are not routinely generated. To tackle this, we aim to enhance the multi-energy imaging capability of EID-CT through the utilization of a convolutional neural network to LEarn MONoenergetic imAging from VMIs at Different Energies (LEMONADE). The neural network was trained using ten patient cCTA exams acquired on a clinical PCD-CT (NAEOTOM Alpha, Siemens Healthineers), with 70 keV VMIs as input (which is nominally equivalent to the SECT from EID-CT scanned at 120 kV) and 100 keV VMIs as the target. Subsequently, we evaluated the performance of EID-CT equipped with LEMONADE on both phantom and patient cases (n=10) for stenosis assessment. Results indicated that LEMONADE accurately quantified stenosis in three phantoms, aligning closely with ground truth and demonstrating stenosis percentage area reductions of 13%, 8%, and 9%. In patient cases, it led to a 12.9% reduction in average diameter luminal stenosis when compared to the original SECT without LEMONADE. These outcomes highlight LEMONADE's capacity to enable multi-energy CT imaging, mitigate blooming artifacts, and improve stenosis assessment for the widely available EID-CT. This has a high potential impact as most cCTA exams are performed on EID-CT.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12925 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11014427/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140874090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Evaluation of Mean Shift, ComBat, and CycleGAN for Harmonizing Brain Connectivity Matrices Across Sites.
Proceedings of SPIE--the International Society for Optical Engineering. Pub Date: 2024-02-01. Epub Date: 2024-04-02. DOI: 10.1117/12.3005563
Hanliang Xu, Nancy R Newlin, Michael E Kim, Chenyu Gao, Praitayini Kanakaraj, Aravind R Krishnan, Lucas W Remedios, Nazirah Mohd Khairi, Kimberly Pechman, Derek Archer, Timothy J Hohman, Angela L Jefferson, Ivana Isgum, Yuankai Huo, Daniel Moyer, Kurt G Schilling, Bennett A Landman
{"title":"Evaluation of Mean Shift, ComBat, and CycleGAN for Harmonizing Brain Connectivity Matrices Across Sites.","authors":"Hanliang Xu, Nancy R Newlin, Michael E Kim, Chenyu Gao, Praitayini Kanakaraj, Aravind R Krishnan, Lucas W Remedios, Nazirah Mohd Khairi, Kimberly Pechman, Derek Archer, Timothy J Hohman, Angela L Jefferson, Ivana Isgum, Yuankai Huo, Daniel Moyer, Kurt G Schilling, Bennett A Landman","doi":"10.1117/12.3005563","DOIUrl":"10.1117/12.3005563","url":null,"abstract":"<p><p>Connectivity matrices derived from diffusion MRI (dMRI) provide an interpretable and generalizable way of understanding the human brain connectome. However, dMRI suffers from inter-site and between-scanner variation, which impedes analysis across datasets to improve robustness and reproducibility of results. To evaluate different harmonization approaches on connectivity matrices, we compared graph measures derived from these matrices before and after applying three harmonization techniques: mean shift, ComBat, and CycleGAN. The sample comprises 168 age-matched, sex-matched normal subjects from two studies: the Vanderbilt Memory and Aging Project (VMAP) and the Biomarkers of Cognitive Decline Among Normal Individuals (BIOCARD). First, we plotted the graph measures and used coefficient of variation (CoV) and the Mann-Whitney U test to evaluate different methods' effectiveness in removing site effects on the matrices and the derived graph measures. ComBat effectively eliminated site effects for global efficiency and modularity and outperformed the other two methods. However, all methods exhibited poor performance when harmonizing average betweenness centrality. Second, we tested whether our harmonization methods preserved correlations between age and graph measures. All methods except for CycleGAN in one direction improved correlations between age and global efficiency and between age and modularity from insignificant to significant with p-values less than 0.05.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12926 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11415266/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142302985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Assessment of Subject Head Motion in Diffusion MRI.
Proceedings of SPIE--the International Society for Optical Engineering. Pub Date: 2024-02-01. Epub Date: 2024-04-02. DOI: 10.1117/12.3006633
Ema Topolnjak, Chenyu Gao, Lori L Beason-Held, Susan M Resnick, Kurt G Schilling, Bennett A Landman
{"title":"Assessment of Subject Head Motion in Diffusion MRI.","authors":"Ema Topolnjak, Chenyu Gao, Lori L Beason-Held, Susan M Resnick, Kurt G Schilling, Bennett A Landman","doi":"10.1117/12.3006633","DOIUrl":"10.1117/12.3006633","url":null,"abstract":"<p><p>Subject head motion during the acquisition of diffusion-weighted imaging (DWI) of the brain induces artifacts and affects image quality. Information about the frequency and extent of motion could reveal which aspects of motion correction are most necessary. Therefore, we investigate the extent of translation and rotation among participants, and how the motion changes during the scan acquisition. We analyze 5,380 DWI scans from 1,034 participants. We measure the rotations and translations in the sagittal, coronal and transverse planes needed to align the volumes to the first and previous volumes, as well as the displacement. The different types of motion are compared with each other and compared over time. The largest rotation (per minute) is around the right - left axis (median 0.378 °/min, range 0.000 - 11.466°) and the largest translation (per minute) is along the anterior - posterior axis (median 1.867 mm/min, range 0.000 - 10.944 mm). We additionally observe that spikes in movement occur at the beginning of the scan, particularly in anterior - posterior translation. The results show that all scans are affected by subtle head motion, which may impact subsequent image analysis.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12926 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11364405/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142115749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
First In-Vivo demonstration of 1000fps High Speed Coronary Angiography (HSCA) in a swine animal model.
Proceedings of SPIE--the International Society for Optical Engineering. Pub Date: 2024-02-01. Epub Date: 2024-04-02. DOI: 10.1117/12.3006858
S V Setlur Nagesh, E Vanderbilt, C Koenigsknecht, D Pionessa, V K Chivukula, C N Ionita, David M Zlotnick, D R Bednarek, S Rudin
{"title":"First In-Vivo demonstration of 1000fps High Speed Coronary Angiography (HSCA) in a swine animal model.","authors":"S V Setlur Nagesh, E Vanderbilt, C Koenigsknecht, D Pionessa, V K Chivukula, C N Ionita, David M Zlotnick, D R Bednarek, S Rudin","doi":"10.1117/12.3006858","DOIUrl":"10.1117/12.3006858","url":null,"abstract":"<p><p>High-speed-angiography (HSA) 1000 fps imaging was successfully used previously to visualize contrast media/blood flow in neurovascular anatomies. In this work we explore its usage in cardiovascular anatomies in a swine animal model. A 5 French catheter was guided into the right coronary artery of a swine, followed by the injection of iodine contrast through a computer-controlled injector at a controlled rate of 40 (ml/min). The injection process was captured using high-speed angiography at a rate of 1000 fps. The noise in the images was reduced using a custom built machine-learning model consisting of Long Short-term memory networks. From the noise reduced images, velocity profiles of contrast/blood flow through the artery was calculated using Horn-Schunck optical flow (OF) method. From the high-speed coronary angiography (HSCA) images, the bolus of contrast could be visually tracked with ease as it traversed from the catheter tip through the artery. The imaging technique's high temporal resolution effectively minimized motion artifacts resulting from the heart's activity. The OF results of the contrast injection show velocities in the artery ranging from 20 - 40 cm/s. The results demonstrate the potential of 1000 fps HSCA in cardiovascular imaging. The combined high spatial and temporal resolution offered by this technique allows for the derivation of velocity profiles throughout the artery's structure, including regions distal and proximal to stenoses. This information can potentially be used to determine the need for stenoses treatment. Further investigations are warranted to expand our understanding of the applications of HSCA in cardiovascular research and clinical practice.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12930 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11492795/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142482677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learned high-resolution cardiac CT imaging from ultra-high-resolution PCD-CT.
Proceedings of SPIE--the International Society for Optical Engineering. Pub Date: 2024-02-01. Epub Date: 2024-04-01. DOI: 10.1117/12.3006463
Emily K Koons, Hao Gong, Andrew Missert, Shaojie Chang, Tim Winfree, Zhongxing Zhou, Cynthia H McCollough, Shuai Leng
{"title":"Learned high-resolution cardiac CT imaging from ultra-high-resolution PCD-CT.","authors":"Emily K Koons, Hao Gong, Andrew Missert, Shaojie Chang, Tim Winfree, Zhongxing Zhou, Cynthia H McCollough, Shuai Leng","doi":"10.1117/12.3006463","DOIUrl":"https://doi.org/10.1117/12.3006463","url":null,"abstract":"<p><p>Coronary computed tomography angiography (cCTA) is a widely used non-invasive diagnostic exam for patients with coronary artery disease (CAD). However, most clinical CT scanners are limited in spatial resolution from use of energy-integrating detectors (EIDs). Radiological evaluation of CAD is challenging, as coronary arteries are small (3-4 mm diameter) and calcifications within them are highly attenuating, leading to blooming artifacts. As such, this is a task well suited for high spatial resolution. Recently, photon-counting-detector (PCD) CT became commercially available, allowing for ultra-high resolution (UHR) data acquisition. However, PCD-CTs are costly, restricting widespread accessibility. To address this problem, we propose a super resolution convolutional neural network (CNN): ILUMENATE (<b>I</b>mproved <b>LUMEN</b> visualization through <b>A</b>rtificial super-resolu<b>T</b>ion imag<b>E</b>s), creating a high resolution (HR) image simulating UHR PCD-CT. The network was trained and validated using patches extracted from 8 patients with a modified U-Net architecture. Training input and labels consisted of UHR PCD-CT images reconstructed with a smooth kernel degrading resolution (LR input) and sharp kernel (HR label). The network learned the resolution difference and was tested on 5 unseen LR patients. We evaluated network performance quantitatively and qualitatively through visual inspection, line profiles to assess spatial resolution improvements, ROIs for CT number stability and noise assessment, structural similarity index (SSIM), and percent diameter luminal stenosis. Overall, ILUMENATE improved images quantitatively and qualitatively, creating sharper edges more closely resembling reconstructed HR reference images, maintained stable CT numbers with less than 4% difference, reduced noise by 28%, maintained structural similarity (average SSIM = 0.70), and reduced percent diameter stenosis with respect to input images. ILUMENATE demonstrates potential impact for CAD patient management, improving the quality of LR CT images bringing them closer to UHR PCD-CT images.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12925 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11008336/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140866975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing Colorectal Cancer Tumor Bud Detection Using Deep Learning from Routine H&E-Stained Slides.
Proceedings of SPIE--the International Society for Optical Engineering. Pub Date: 2024-02-01. Epub Date: 2024-04-03. DOI: 10.1117/12.3006796
Usama Sajjad, Wei Chen, Mostafa Rezapour, Ziyu Su, Thomas Tavolara, Wendy L Frankel, Metin N Gurcan, M Khalid Khan Niazi
{"title":"Enhancing Colorectal Cancer Tumor Bud Detection Using Deep Learning from Routine H&E-Stained Slides.","authors":"Usama Sajjad, Wei Chen, Mostafa Rezapour, Ziyu Su, Thomas Tavolara, Wendy L Frankel, Metin N Gurcan, M Khalid Khan Niazi","doi":"10.1117/12.3006796","DOIUrl":"10.1117/12.3006796","url":null,"abstract":"<p><p>Tumor budding refers to a cluster of one to four tumor cells located at the tumor-invasive front. While tumor budding is a prognostic factor for colorectal cancer, counting and grading tumor budding are time consuming and not highly reproducible. There could be high inter- and intra-reader disagreement on H&E evaluation. This leads to the noisy training (imperfect ground truth) of deep learning algorithms, resulting in high variability and losing their ability to generalize on unseen datasets. Pan-cytokeratin staining is one of the potential solutions to enhance the agreement, but it is not routinely used to identify tumor buds and can lead to false positives. Therefore, we aim to develop a weakly-supervised deep learning method for tumor bud detection from routine H&E-stained images that does not require strict tissue-level annotations. We also propose <i>Bayesian Multiple Instance Learning</i> (BMIL) that combines multiple annotated regions during the training process to further enhance the generalizability and stability in tumor bud detection. Our dataset consists of 29 colorectal cancer H&E-stained images that contain 115 tumor buds per slide on average. In six-fold cross-validation, our method demonstrated an average precision and recall of 0.94, and 0.86 respectively. These results provide preliminary evidence of the feasibility of our approach in improving the generalizability in tumor budding detection using H&E images while avoiding the need for non-routine immunohistochemical staining methods.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12933 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11095418/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140946681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
How accurately can quantitative imaging methods be ranked without ground truth: An upper bound on no-gold-standard evaluation.
Proceedings of SPIE--the International Society for Optical Engineering. Pub Date: 2024-02-01. Epub Date: 2024-03-29. DOI: 10.1117/12.3006888
Yan Liu, Abhinav K Jha
{"title":"How accurately can quantitative imaging methods be ranked without ground truth: An upper bound on no-gold-standard evaluation.","authors":"Yan Liu, Abhinav K Jha","doi":"10.1117/12.3006888","DOIUrl":"10.1117/12.3006888","url":null,"abstract":"<p><p>Objective evaluation of quantitative imaging (QI) methods with patient data, while important, is typically hindered by the lack of gold standards. To address this challenge, no-gold-standard evaluation (NGSE) techniques have been proposed. These techniques have demonstrated efficacy in accurately ranking QI methods without access to gold standards. The development of NGSE methods has raised an important question: how accurately can QI methods be ranked without ground truth. To answer this question, we propose a Cramér-Rao bound (CRB)-based framework that quantifies the upper bound in ranking QI methods without any ground truth. We present the application of this framework in guiding the use of a well-known NGSE technique, namely the regression-without-truth (RWT) technique. Our results show the utility of this framework in quantifying the performance of this NGSE technique for different patient numbers. These results provide motivation towards studying other applications of this upper bound.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12929 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11601990/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142752608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CAFES: Chest X-ray Analysis using Federated Self-supervised Learning for Pediatric COVID-19 Detection.
Proceedings of SPIE--the International Society for Optical Engineering. Pub Date: 2024-02-01. Epub Date: 2024-04-03. DOI: 10.1117/12.3008757
Abhijeet Parida, Syed Muhammad Anwar, Malhar P Patel, Mathias Blom, Tal Tiano Einat, Alex Tonetti, Yuval Baror, Ittai Dayan, Marius George Linguraru
{"title":"CAFES: Chest X-ray Analysis using Federated Self-supervised Learning for Pediatric COVID-19 Detection.","authors":"Abhijeet Parida, Syed Muhammad Anwar, Malhar P Patel, Mathias Blom, Tal Tiano Einat, Alex Tonetti, Yuval Baror, Ittai Dayan, Marius George Linguraru","doi":"10.1117/12.3008757","DOIUrl":"10.1117/12.3008757","url":null,"abstract":"<p><p>Chest X-rays (CXRs) play a pivotal role in cost-effective clinical assessment of various heart and lung related conditions. The urgency of COVID-19 diagnosis prompted their use in identifying conditions like lung opacity, pneumonia, and acute respiratory distress syndrome in pediatric patients. We propose an AI-driven solution for binary COVID-19 versus non-COVID-19 classification in pediatric CXRs. We present a Federated Self-Supervised Learning (FSSL) framework to enhance Vision Transformer (ViT) performance for COVID-19 detection in pediatric CXRs. ViT's prowess in vision-related binary classification tasks, combined with self-supervised pre-training on adult CXR data, forms the basis of the FSSL approach. We implement our strategy on the Rhino Health Federated Computing Platform (FCP), which ensures privacy and scalability for distributed data. The chest X-ray analysis using the federated SSL (CAFES) model, utilizes the FSSL-pre-trained ViT weights and demonstrated gains in accurately detecting COVID-19 when compared with a fully supervised model. Our FSSL-pre-trained ViT showed an area under the precision-recall curve (AUPR) of 0.952, which is 0.231 points higher than the fully supervised model for COVID-19 diagnosis using pediatric data. Our contributions include leveraging vision transformers for effective COVID-19 diagnosis from pediatric CXRs, employing distributed federated learning-based self-supervised pre-training on adult data, and improving pediatric COVID-19 diagnosis performance. This privacy-conscious approach aligns with HIPAA guidelines, paving the way for broader medical imaging applications.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12927 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11167651/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141319196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A hyperspectral surgical microscope with super-resolution reconstruction for intraoperative image guidance.
Proceedings of SPIE--the International Society for Optical Engineering. Pub Date: 2024-02-01. Epub Date: 2024-04-02. DOI: 10.1117/12.3008789
Ling Ma, Kelden Pruitt, Baowei Fei
{"title":"A hyperspectral surgical microscope with super-resolution reconstruction for intraoperative image guidance.","authors":"Ling Ma, Kelden Pruitt, Baowei Fei","doi":"10.1117/12.3008789","DOIUrl":"10.1117/12.3008789","url":null,"abstract":"<p><p>Hyperspectral imaging (HSI) is an emerging imaging modality in medical applications, especially for intraoperative image guidance. A surgical microscope improves surgeons' visualization with fine details during surgery. The combination of HSI and surgical microscope can provide a powerful tool for surgical guidance. However, to acquire high-resolution hyperspectral images, the long integration time and large image file size can be a burden for intraoperative applications. Super-resolution reconstruction allows acquisition of low-resolution hyperspectral images and generates high-resolution HSI. In this work, we developed a hyperspectral surgical microscope and employed our unsupervised super-resolution neural network, which generated high-resolution hyperspectral images with fine textures and spectral characteristics of tissues. The proposed method can reduce the acquisition time and save storage space taken up by hyperspectral images without compromising image quality, which will facilitate the adaptation of hyperspectral imaging technology in intraoperative image guidance.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12930 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11093589/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140924175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0