Latest Articles: Proceedings of SPIE--the International Society for Optical Engineering

A systematic assessment of photon-counting CT for bone mineral density and microarchitecture quantifications.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2023-02-01
Cindy McCabe, Thomas J Sauer, Mojtaba Zarei, W Paul Segars, Ehsan Samei, Ehsan Abadi
Photon-counting CT (PCCT) is an emerging imaging technology with potential improvements in quantification and rendition of microstructures due to its smaller detector sizes. The aim of this study was to assess the performance of a new PCCT scanner (NAEOTOM Alpha, Siemens) in quantifying clinically relevant bone imaging biomarkers for the characterization of common bone diseases. We evaluated the ability of PCCT to quantify bone microarchitecture compared with conventional energy-integrating CT. The quantifications were done through virtual imaging trials, using a 50th-percentile-BMI male virtual patient with a detailed model of trabecular bone at varied bone densities in the lumbar spine. The virtual patient was imaged using a validated CT simulator (DukeSim) at CTDIvol of 20 and 40 mGy for three scan modes: ultra-high-resolution PCCT (UHR-PCCT), high-resolution PCCT (HR-PCCT), and conventional energy-integrating CT (EICT; FORCE, Siemens). Each scan mode was reconstructed with varying parameters to evaluate their effect on quantification. Bone mineral density (BMD), trabecular bone volume to total bone volume (BV/TV), and radiomics texture features were calculated in each vertebra. UHR-PCCT images yielded the most accurate BMD measurements relative to the ground truth (error: 3.3% ± 1.5%), compared to HR-PCCT (error: 5.3% ± 2.0%) and EICT (error: 7.1% ± 2.0%). In BV/TV quantifications, UHR-PCCT (error: 29.7% ± 11.8%) outperformed HR-PCCT (error: 80.6% ± 31.4%) and EICT (error: 67.3% ± 64.3%). UHR-PCCT and HR-PCCT texture features were sensitive to anatomical changes when using the sharpest kernel, whereas EICT texture radiomics showed no clear trend reflecting disease progression. This study demonstrated the potential utility of PCCT technology for improved bone quantification, leading to more accurate characterization of bone diseases.
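The BV/TV and percent-error metrics above are simple to compute once the trabecular compartment is segmented. A minimal sketch (not the authors' pipeline; the mask, densities, and sizes are invented for illustration):

```python
import numpy as np

# Hypothetical 3-D binary mask of a vertebral volume of interest:
# True = trabecular bone voxel, False = marrow/background.
rng = np.random.default_rng(0)
bone_mask = rng.random((64, 64, 64)) < 0.25   # ~25% trabecular fraction

def bv_tv(trabecular_mask):
    """Trabecular bone volume / total bone volume (BV/TV)."""
    return trabecular_mask.sum() / trabecular_mask.size

def percent_error(measured, truth):
    """Signed percent error of a quantified biomarker vs. ground truth."""
    return 100.0 * (measured - truth) / truth

print(f"BV/TV = {bv_tv(bone_mask):.3f}")
# e.g. a measured BMD of 0.207 g/cc against a 0.200 g/cc ground truth:
print(f"BMD error = {percent_error(0.207, 0.200):.1f}%")
```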
Harmonization of repetition time and scanner effects on estimates of brain hemodynamic response function.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2023-02-01 Epub Date: 2023-04-03 DOI: 10.1117/12.2653903
Lucie Dole, Kurt G Schilling, Hakmook Kang, John C Gore, Bennett A Landman
Multisite contributions are essential to improve the reliability and statistical power of imaging studies, but they introduce complexity because of different acquisition protocols and scanners. The hemodynamic response function (HRF) is the transform that relates neural activity to the measured blood oxygenation level-dependent (BOLD) signal in MRI and contains information about the latency, amplitude, and duration of neuronal activations. Without harmonization, acquisition variability can severely limit our ability to characterize spatial effects. To address this problem, we propose to study and remove the effects of sampling rate and scanner on estimates of the HRF. We computed the HRF using a blind deconvolution method in 547 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) across 62 sites and 18 scanners, studying how the response changes with repetition time (TR) and scanner model. We applied ComBAT, a statistical multi-site harmonization technique, to evaluate and reduce scanner and repetition time effects, and used the Wilcoxon rank-sum test to assess the performance of the harmonization. Results show high scanner and repetition time variability (|d| ≥ 0.38, p = 4.5 × 10^-5) across features, indicating that harmonization is crucial in multi-site studies. ComBAT successfully removes the sampling effects and reduces the variance between scanners for 7 out of 10 HRF features (|d| ≤ 0.05, p = 0.0052). Scanner effects have been characterized on multi-site datasets before, but the impact of repetition time has been less studied; we showed that different repetition times lead to changes in HRF behavior. Regression modeling of changes in the HRF on the harmonized data was not significant (p = 0.0401), which does not allow us to conclude how the HRF changes with aging.
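ComBAT removes per-scanner shifts in feature location and scale. A simplified sketch of that idea on synthetic data; the full ComBat method additionally pools batch estimates with empirical Bayes, which is omitted here, and all numbers are invented:

```python
import numpy as np

def harmonize_location_scale(features, batch):
    """ComBat-style harmonization, simplified: remove per-batch
    (per-scanner) differences in mean and variance of a 1-D feature,
    then restore the grand mean and grand standard deviation."""
    features = np.asarray(features, dtype=float)
    batch = np.asarray(batch)
    grand_mean, grand_std = features.mean(), features.std()
    out = np.empty_like(features)
    for b in np.unique(batch):
        idx = batch == b
        mu, sd = features[idx].mean(), features[idx].std()
        out[idx] = (features[idx] - mu) / (sd if sd > 0 else 1.0)
        out[idx] = out[idx] * grand_std + grand_mean
    return out

# Two "scanners" with an artificial offset in an HRF feature:
rng = np.random.default_rng(1)
f = np.concatenate([rng.normal(1.0, 0.1, 200), rng.normal(1.5, 0.2, 200)])
site = np.array([0] * 200 + [1] * 200)
h = harmonize_location_scale(f, site)
# Site means and spreads coincide after harmonization:
print(abs(h[site == 0].mean() - h[site == 1].mean()))
```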
Automated Reference Kidney Histomorphometry using a Panoptic Segmentation Neural Network Correlates to Patient Demographics and Creatinine.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2023-02-01 Epub Date: 2023-04-06 DOI: 10.1117/12.2655288
Brandon Ginley, Nicholas Lucarelli, Jarcy Zee, Sanjay Jain, Seung Seok Han, Luis Rodrigues, Michelle L Wong, Kuang-Yu Jen, Pinaki Sarder
Reference histomorphometric data of healthy human kidneys are lacking due to laborious quantitation requirements. We leveraged deep learning to investigate the relationship of histomorphometry with patient age, sex, and serum creatinine in a multinational set of reference kidney tissue sections. A panoptic segmentation neural network was developed and used to segment viable and sclerotic glomeruli, cortical and medullary interstitia, tubules, and arteries/arterioles in digitized images of 79 periodic acid-Schiff (PAS)-stained human nephrectomy sections showing minimal pathologic changes. Simple morphometrics (e.g., area, radius, density) were measured from the segmented classes. Regression analysis was used to determine the relationship of histomorphometric parameters with age, sex, and serum creatinine. The model achieved high segmentation performance for all test compartments. We found that the size and density of nephrons, arteries/arterioles, and the baseline level of interstitium vary significantly among healthy humans, with potentially large differences between subjects from different geographic locations. Nephron size in any region of the kidney was significantly dependent on patient creatinine. Slight differences in renal vasculature and interstitium were observed between sexes. Finally, glomerulosclerosis percentage increased and cortical density of arteries/arterioles decreased as a function of age. We show that precise measurement of kidney histomorphometric parameters can be automated. Even in reference kidney tissue sections with minimal pathologic changes, several histomorphometric parameters demonstrated significant correlations with patient demographics and serum creatinine. These robust tools support the feasibility of deep learning to increase efficiency and rigor in histomorphometric analysis and pave the way for future large-scale studies.
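Simple morphometrics such as area and equivalent radius follow directly from a segmentation label map. A hedged sketch with an invented label map, illustrative class codes, and an assumed pixel size (not the authors' code):

```python
import numpy as np

# Hypothetical 2-D label map from a segmentation network:
# 0 = background, 1 = glomerulus, 2 = tubule (class codes are invented).
label_map = np.zeros((512, 512), dtype=int)
label_map[100:150, 100:150] = 1          # one square "glomerulus"
label_map[300:360, 200:230] = 2          # one "tubule"

PIXEL_AREA_UM2 = 0.25                    # assumed 0.5 x 0.5 um pixels

def class_area(labels, cls, pixel_area):
    """Total area of one segmented class, in physical units."""
    return (labels == cls).sum() * pixel_area

def equivalent_radius(area):
    """Radius of the circle with the same area -- a simple size proxy."""
    return float(np.sqrt(area / np.pi))

glom_area = class_area(label_map, 1, PIXEL_AREA_UM2)
print(f"glomerular area: {glom_area:.1f} um^2, "
      f"equivalent radius: {equivalent_radius(glom_area):.2f} um")
```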
Contrast- and noise-dependent spatial resolution measurement for deep convolutional neural network-based noise reduction in CT using patient data.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2023-02-01 DOI: 10.1117/12.2654972
Zhongxing Zhou, Hao Gong, Scott Hsieh, Cynthia H McCollough, Lifeng Yu
Deep convolutional neural network (DCNN)-based noise reduction methods have been increasingly deployed in clinical CT, and accurate assessment of their spatial resolution properties is required. Spatial resolution is typically measured on physical phantoms, which may not represent the true performance of a DCNN in patients: the network is typically trained and tested with patient images, and its generalizability to physical phantoms is questionable. In this work, we propose a patient-data-based framework to measure the spatial resolution of DCNN methods, which involves lesion and noise insertion in the projection domain, lesion ensemble averaging, and modulation transfer function (MTF) measurement using an oversampled edge spread function from the cylindrical lesion signal. The impact of varying lesion contrast, dose level, and DCNN denoising strength was investigated for a ResNet-based DCNN model trained on patient images. The spatial resolution degradation of DCNN reconstructions became more severe as the contrast or radiation dose decreased, or as the denoising strength increased. The measured 50%/10% MTF spatial frequencies of the DCNN at the highest denoising strength were: -500 HU, 0.36/0.72 mm^-1; -100 HU, 0.32/0.65 mm^-1; -50 HU, 0.27/0.53 mm^-1; -20 HU, 0.18/0.36 mm^-1; -10 HU, 0.15/0.30 mm^-1, while the 50%/10% MTF values of FBP remained nearly constant at 0.38/0.76 mm^-1.
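The MTF-from-ESF procedure (differentiate, transform, normalize) can be sketched on a synthetic Gaussian-blurred edge; the blur width, sampling, and window choice below are assumptions for illustration, not values from the study:

```python
import numpy as np
from math import erf

def mtf_from_esf(esf, spacing_mm):
    """MTF from an oversampled edge spread function (ESF): differentiate
    to get the line spread function (LSF), window it, Fourier-transform,
    and normalize to 1 at zero frequency."""
    lsf = np.gradient(esf)
    lsf = lsf * np.hanning(lsf.size)                  # suppress noisy tails
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(esf.size, d=spacing_mm)   # cycles/mm
    return freqs, mtf

def freq_at_fraction(freqs, mtf, frac):
    """First spatial frequency at which the MTF falls to `frac`."""
    below = np.where(mtf <= frac)[0]
    return float(freqs[below[0]]) if below.size else float(freqs[-1])

# Synthetic edge blurred by a Gaussian (sigma = 0.4 mm, 512 samples
# over 10 mm), standing in for the oversampled cylinder-edge ESF.
x = np.linspace(-5.0, 5.0, 512)
esf = np.array([0.5 * (1.0 + erf(v / (np.sqrt(2.0) * 0.4))) for v in x])
freqs, mtf = mtf_from_esf(esf, spacing_mm=x[1] - x[0])
print("50% MTF:", freq_at_fraction(freqs, mtf, 0.5), "cycles/mm")
print("10% MTF:", freq_at_fraction(freqs, mtf, 0.1), "cycles/mm")
```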
Multi-energy CT material decomposition using Bayesian deep convolutional neural network with explicit penalty of uncertainty and bias.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2023-02-01 DOI: 10.1117/12.2654317
Hao Gong, Shuai Leng, Francis Baffour, Lifeng Yu, Joel G Fletcher, Cynthia H McCollough
Convolutional neural network (CNN)-based material decomposition has the potential to improve image quality (visual appearance) and the quantitative accuracy of material maps. Most methods use deterministic CNNs with a mean-square-error loss to provide point estimates of mass densities. Point estimates can be overconfident, as the reliability of CNNs is frequently compromised by bias and by two major uncertainties: data uncertainty and model uncertainty, originating from noise in the inputs and train-test data dissimilarity, respectively. Moreover, the mean-square-error loss lacks explicit control of uncertainty and bias. To tackle these problems, a Bayesian dual-task CNN (BDT-CNN) with explicit penalization of uncertainty and bias was developed. It is a probabilistic CNN that concurrently conducts material classification and quantification and allows for pixel-wise modeling of bias, data uncertainty, and model uncertainty. The CNN was trained with images of physical and simulated tissue-mimicking inserts at varying mass densities. Hydroxyapatite (nominal density 400 mg/cc) and blood (nominal density 1095 mg/cc) inserts were placed in body phantoms of different sizes (30-45 cm) and used to evaluate the mean absolute bias (MAB) in predicted mass densities across different images at routine and half-routine dose. Patient CT exams were collected to assess the generalizability of BDT-CNN in the presence of anatomical background, with noise insertion used to simulate exams at half- and quarter-routine dose. A deterministic dual-task CNN was used as the baseline. In phantoms, BDT-CNN improved the consistency of insert delineation, especially at edges, and reduced overall bias (average MAB for hydroxyapatite: BDT-CNN 5.4 mgHA/cc vs. baseline 11.0 mgHA/cc; for blood: BDT-CNN 8.9 mgBlood/cc vs. baseline 14.0 mgBlood/cc). In patient images, BDT-CNN improved detail preservation, lesion conspicuity, and structural consistency across dose levels.
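Pixel-wise data and model uncertainty in probabilistic networks follow a standard decomposition of Monte Carlo predictions: data (aleatoric) uncertainty is the mean of the predicted variances, and model (epistemic) uncertainty is the variance of the predicted means. A sketch on synthetic predictions (every number below is invented, not from the paper):

```python
import numpy as np

# Hypothetical output of T stochastic forward passes of a probabilistic
# network: each pass predicts a per-pixel mean mass density and a
# per-pixel variance (the data-uncertainty head).
rng = np.random.default_rng(2)
T, n_pix = 50, 1000
pred_means = 400.0 + rng.normal(0.0, 5.0, (T, n_pix))   # mgHA/cc
pred_vars = np.full((T, n_pix), 9.0)                    # fixed aleatoric var

def decompose_uncertainty(means, variances):
    """Predictive mean plus the standard aleatoric/epistemic split
    across Monte Carlo samples (axis 0)."""
    predictive_mean = means.mean(axis=0)
    data_u = variances.mean(axis=0)    # aleatoric: mean of variances
    model_u = means.var(axis=0)        # epistemic: variance of means
    return predictive_mean, data_u, model_u

mu, data_u, model_u = decompose_uncertainty(pred_means, pred_vars)
print(mu.mean(), data_u.mean(), model_u.mean())
```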
Patient-specific uncertainty and bias quantification of non-transparent convolutional neural network model through knowledge distillation and Bayesian deep learning.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2023-02-01 DOI: 10.1117/12.2654318
Hao Gong, Lifeng Yu, Shuai Leng, Scott S Hsieh, Joel G Fletcher, Cynthia H McCollough
Assessing the reliability of convolutional neural network (CNN)-based CT imaging techniques is critical for their deployment in practice. Some evaluation methods exist but require full access to the target CNN architecture and training data, which is not available for proprietary or commercial algorithms, and systematic evaluation methods are lacking. To address these issues, we propose a patient-specific uncertainty and bias quantification (UNIQ) method that integrates knowledge distillation and Bayesian deep learning. Knowledge distillation creates a transparent CNN ("Student CNN") to approximate the target non-transparent CNN ("Teacher CNN"). The Student CNN is built as a Bayesian-deep-learning-based probabilistic CNN that, for each input, generates a statistical distribution of the corresponding outputs and characterizes the predictive mean and the two major uncertainties: data uncertainty and model uncertainty. UNIQ was evaluated using a low-dose CT denoising task. Patient and phantom scans at routine dose and synthetic quarter dose were used to create training, validation, and testing sets. As a demonstration, a U-Net and a ResNet were used as the backbones of the Teacher CNN and Student CNN, respectively, and were trained using independent training sets. The Student ResNet was evaluated qualitatively and quantitatively. Its pixel-wise predictive mean, data uncertainty, and model uncertainty were very similar to the counterparts from the Teacher U-Net (mean absolute error: predictive mean 1.5 HU, data uncertainty 1.8 HU, model uncertainty 1.3 HU; mean 2D correlation coefficient: total uncertainty 0.90, data uncertainty 0.86, model uncertainty 0.83). The proposed UNIQ can potentially characterize the reliability of non-transparent CNN models used in CT in a systematic manner.
Estimating normal metabolic activity for disease quantification via PET/CT images.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2023-02-01 DOI: 10.1117/12.2654882
Jieyu Li, Jayaram K Udupa, Yubing Tong, Drew A Torigian
In this paper, we propose a novel pipeline for disease quantification in positron emission tomography/computed tomography (PET/CT) images on anatomically predefined objects. The pipeline is composed of standardized uptake value (SUV) standardization, object segmentation, and disease quantification (DQ). DQ is conducted on non-linearly standardized PET images and masks of target objects derived from the CT images. Total lesion burden (TLB) is quantified by estimating the normal metabolic activity (TMA_n) in the object and subtracting this entity from the total metabolic activity (TMA) of the object, thereby measuring the overall disease quantity of the region of interest without explicitly segmenting individual lesions. TMA_n is calculated with object-specific SUV distribution models. In the modeling stage, SUV models are constructed from a set of PET/CT images obtained from normal subjects with manually delineated masks of target objects. Two ways of SUV modeling are explored: in the hard strategy, the mean of the mean values of the modeling samples is utilized as a single normality value; in the fuzzy strategy, a likelihood of representing normal tissue is determined from the SUV distribution (histogram) for each SUV value. Evaluation experiments were conducted on a separate clinical dataset of normal subjects and a phantom dataset with lesions. The ratio of absolute TLB to TMA is taken as the metric, alleviating individual differences in volume size and uptake level. The results show that the ratios in normal objects are close to 0 and the ratios for lesions approach 1, demonstrating that contributions to TLB are minimal from normal tissue and come mainly from lesion tissue.
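Once a normality model is fixed, the TLB computation reduces to a subtraction. A sketch of the "hard" strategy on synthetic SUV volumes (all masks and values invented for illustration; the fuzzy variant would weight each SUV by a likelihood of being normal instead):

```python
import numpy as np

def lesion_burden_ratio(suv, object_mask, normal_mean_suv):
    """|TLB| / TMA for one object, following the subtraction idea:
    TMA   = summed SUV over the object,
    TMA_n = expected normal activity = normal-tissue mean SUV x voxel count,
    TLB   = TMA - TMA_n."""
    suv_in = suv[object_mask]
    tma = suv_in.sum()
    tma_n = normal_mean_suv * suv_in.size
    return abs(tma - tma_n) / tma

rng = np.random.default_rng(3)
mask = np.ones((20, 20, 20), dtype=bool)
healthy = rng.normal(1.0, 0.05, mask.shape)    # near-normal uptake
diseased = healthy.copy()
diseased[:10] += 9.0                           # hot "lesion" in half the object
print(lesion_burden_ratio(healthy, mask, 1.0))   # close to 0
print(lesion_burden_ratio(diseased, mask, 1.0))  # dominated by the lesion
```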
Batch size: go big or go home? Counterintuitive improvement in medical autoencoders with smaller batch size.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2023-02-01 Epub Date: 2023-04-03 DOI: 10.1117/12.2653643
Cailey I Kerley, Leon Y Cai, Yucheng Tang, Lori L Beason-Held, Susan M Resnick, Laurie E Cutting, Bennett A Landman
Batch size is a key hyperparameter in training deep learning models, and conventional wisdom suggests that larger batches produce improved model performance. Here we present evidence to the contrary, particularly when using autoencoders to derive meaningful latent spaces from data with spatially global similarities and local differences, such as electronic health records (EHR) and medical imaging. We investigate batch size effects in both EHR data from the Baltimore Longitudinal Study of Aging and medical imaging data from the multimodal brain tumor segmentation (BraTS) challenge. We train fully connected and convolutional autoencoders to compress the EHR and imaging input spaces, respectively, into 32-dimensional latent spaces via reconstruction losses, for batch sizes between 1 and 100. Under the same hyperparameter configurations, smaller batches improve loss performance for both datasets. Additionally, latent spaces derived by autoencoders with smaller batches capture more biologically meaningful information. Qualitatively, we visualize 2-dimensional projections of the latent spaces and find that with smaller batches the EHR network better separates the sex of the individuals, and the imaging network better captures the right-left laterality of tumors. Quantitatively, the analogous sex classification and laterality regressions using the latent spaces demonstrate statistically significant improvements in performance at smaller batch sizes. Finally, we find improved individual variation locally in visualizations of representative data reconstructions at lower batch sizes. Taken together, these results suggest that smaller batch sizes should be considered when designing autoencoders to extract meaningful latent spaces from EHR and medical imaging data driven by global similarities and local variation.
Time-distance vision transformers in lung cancer diagnosis from longitudinal computed tomography.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2023-02-01 Epub Date: 2023-04-03 DOI: 10.1117/12.2653911
Thomas Z Li, Kaiwen Xu, Riqiang Gao, Yucheng Tang, Thomas A Lasko, Fabien Maldonado, Kim L Sandler, Bennett A Landman
Features learned from single radiologic images are unable to provide information about whether and how much a lesion may be changing over time. Time-dependent features computed from repeated images can capture those changes and help identify malignant lesions by their temporal behavior. However, longitudinal medical imaging presents the unique challenge of sparse, irregular time intervals in data acquisition. While self-attention has been shown to be a versatile and efficient learning mechanism for time series and natural images, its potential for interpreting temporal distance between sparse, irregularly sampled spatial features has not been explored. In this work, we propose two interpretations of a time-distance vision transformer (ViT): (1) vector embeddings of continuous time and (2) a temporal emphasis model that scales self-attention weights. The two algorithms are evaluated on benign-versus-malignant lung cancer discrimination using synthetic pulmonary nodules and lung screening computed tomography studies from the National Lung Screening Trial (NLST). Experiments on synthetic nodules show a fundamental improvement in classifying irregularly sampled longitudinal images compared with standard ViTs. In cross-validation on screening chest CTs from the NLST, our methods (0.785 and 0.786 AUC, respectively) significantly outperform a cross-sectional approach (0.734 AUC) and match the discriminative performance of the leading longitudinal medical imaging algorithm (0.779 AUC) on benign-versus-malignant classification. This work represents the first self-attention-based framework for classifying longitudinal medical images. Our code is available at https://github.com/tom1193/time-distance-transformer.
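Interpretation (1), vector embeddings of continuous time, can be realized with the familiar sinusoidal positional encoding evaluated at real-valued acquisition times rather than integer indices. This is a sketch of the general technique under that assumption, not the authors' exact implementation:

```python
import numpy as np

def time_embedding(t_days, dim=16, max_period=10000.0):
    """Sinusoidal vector embedding of continuous time (in days):
    the transformer positional-encoding form, evaluated at real-valued,
    irregularly spaced scan times."""
    t = np.asarray(t_days, dtype=float)[..., None]    # (n, 1)
    i = np.arange(dim // 2)
    freqs = 1.0 / (max_period ** (2 * i / dim))       # geometric frequency ladder
    angles = t * freqs                                # (n, dim/2)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

# Irregularly spaced longitudinal scans: baseline, ~6 months, ~2.1 years.
emb = time_embedding([0.0, 183.0, 770.0], dim=16)
print(emb.shape)   # (3, 16)
```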
Deep whole brain segmentation of 7T structural MRI.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2023-02-01 Epub Date: 2023-04-03 DOI: 10.1117/12.2654108
Karthik Ramadass, Xin Yu, Leon Y Cai, Yucheng Tang, Shunxing Bao, Cailey Kerley, Micah D'Archangel, Laura A Barquero, Allen T Newton, Isabel Gauthier, Rankin Williams McGugin, Benoit M Dawant, Laurie E Cutting, Yuankai Huo, Bennett A Landman
7T magnetic resonance imaging (MRI) has the potential to advance our understanding of human brain function through new contrast and enhanced resolution. Whole brain segmentation is a key neuroimaging technique that allows for region-by-region analysis of the brain; it is also an important preliminary step that provides spatial and volumetric information for other neuroimaging pipelines. Spatially localized atlas network tiles (SLANT) is a popular 3D convolutional neural network (CNN) tool that breaks the whole brain segmentation task into localized sub-tasks, each handled by an independent 3D convolutional network, to provide high-resolution whole brain segmentation results. SLANT has been widely used to generate whole brain segmentations from structural scans acquired at 3T. However, applying SLANT to structural 7T MRI has not been successful due to the inhomogeneous image contrast typically seen across the brain at 7T. For instance, we demonstrate that the mean percent difference of SLANT label volumes between a 3T scan-rescan pair is approximately 1.73%, whereas the 3T-7T counterpart shows higher differences, around 15.13%. Our approach is to register whole brain segmentations performed on 3T MRI to 7T MRI and use this information to finetune SLANT for structural 7T MRI. With the finetuned SLANT pipeline, we observe a lower mean relative difference of ~8.43% in label volumes from structural 7T MRI data. The Dice similarity coefficient between the SLANT segmentation on the 3T scan and the finetuned SLANT segmentation on the 7T scan increased from 0.79 to 0.83 (p < 0.01). These results suggest that finetuning SLANT is a viable solution for improving whole brain segmentation on high-resolution 7T structural imaging.
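The Dice similarity coefficient used in this evaluation is defined as 2|A ∩ B| / (|A| + |B|) for two binary masks. A minimal sketch with invented masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|); defined as 1 for two empty masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two overlapping hypothetical label masks for one brain region:
m1 = np.zeros((100, 100), dtype=bool); m1[20:60, 20:60] = True
m2 = np.zeros((100, 100), dtype=bool); m2[30:70, 20:60] = True
print(round(dice(m1, m2), 3))   # 0.75
```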