Proceedings of SPIE--the International Society for Optical Engineering: Latest Articles

Image Texture Based Classification Methods to Mimic Perceptual Models of Search and Localization in Medical Images.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-03-29 DOI: 10.1117/12.3008844
Diego Andrade, Howard C Gifford, Mini Das
This study explores the validity of texture-based classification in the early stages of visual search and classification. We first summarize our group's prior findings on predicting signal-detection difficulty from second-order statistical image-texture features in tomographic breast images. Alongside the development of visual-search model observers that accurately mimic search and localization in medical images, we continue to examine the efficacy of texture-based classification and segmentation methods. We consider both first- and second-order features through a combination of texture maps and a Gaussian mixture model (GMM). Our aim is to evaluate the advantages of integrating these methods at the early stages of the visual-search process, particularly in scenarios where target morphological features may be less apparent or known, as in clinical data. By merging knowledge of imaging physics with a texture-based GMM, we enhance classification efficiency and refine the localization of regions suspected of containing targets.

Volume 12929. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11956787/pdf/
Citations: 0
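As an illustration of the texture-map-plus-GMM idea described above, the sketch below computes simple first-order texture maps (local mean and variance) and fits a two-component Gaussian mixture to segment a high-variance region. The feature choices and synthetic image are assumptions for illustration, not the authors' actual pipeline.

```python
# Hypothetical texture-map + GMM segmentation sketch; features and data
# are placeholders, not the paper's actual pipeline.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic image: smooth background with a high-variance "suspicious" patch.
img = rng.normal(0.0, 0.1, (64, 64))
img[20:40, 20:40] += rng.normal(0.0, 1.0, (20, 20))

# First-order texture maps: local mean and local variance over a 5x5 window.
local_mean = uniform_filter(img, size=5)
local_var = uniform_filter(img**2, size=5) - local_mean**2

features = np.stack([local_mean.ravel(), local_var.ravel()], axis=1)

# Two-component GMM separates smooth vs. textured regions.
gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
labels = gmm.predict(features).reshape(img.shape)

# The component with higher mean local variance is the "suspicious" class.
target = int(np.argmax(gmm.means_[:, 1]))
mask = labels == target
```

In a real pipeline, second-order features (e.g., co-occurrence statistics) would be stacked into `features` the same way.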
Registration of Longitudinal Spine CTs for Monitoring Lesion Growth.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-02 DOI: 10.1117/12.3006621
Malika Sanhinova, Nazim Haouchine, Steve D Pieper, William M Wells, Tracy A Balboni, Alexander Spektor, Mai Anh Huynh, Jeffrey P Guenette, Bryan Czajkowski, Sarah Caplan, Patrick Doyle, Heejoo Kang, David B Hackney, Ron N Alkalay
Accurate and reliable registration of longitudinal spine images is essential for assessing disease progression and surgical outcomes. A fully automatic and robust registration is crucial for clinical use, but is challenging because lesions cause substantial changes in shape and appearance. In this paper we present a novel method to automatically align longitudinal spine CTs and accurately assess lesion progression. Our method follows a two-step pipeline: vertebrae are first automatically localized and labeled and 3D surfaces are generated using a deep learning model; the surfaces are then longitudinally aligned using Gaussian-mixture-model surface registration. We tested our approach on 37 vertebrae from 5 patients, with baseline CTs and 3-, 6-, and 12-month follow-ups, yielding 111 registrations. Our experiments showed accurate registration, with an average Hausdorff distance of 0.65 mm and an average Dice score of 0.92.

Volume 12926. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11416858/pdf/
Citations: 0
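The alignment-and-evaluation steps above can be sketched on toy data: rigidly align two corresponding vertebral point sets (here via the Kabsch algorithm, a simpler stand-in for the paper's GMM surface registration) and report the symmetric Hausdorff distance used as an evaluation metric. The point sets are synthetic placeholders.

```python
# Toy rigid point-set alignment (Kabsch) + Hausdorff metric; a stand-in for
# the paper's GMM surface registration, on synthetic data.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
fixed = rng.normal(size=(200, 3))

# Simulate a follow-up scan: the same surface, rotated and translated.
angle = 0.3
rot = np.array([[np.cos(angle), -np.sin(angle), 0],
                [np.sin(angle),  np.cos(angle), 0],
                [0, 0, 1]])
moving = fixed @ rot.T + np.array([5.0, -2.0, 1.0])

# Kabsch: optimal rotation between centered corresponding point sets.
fc, mc = fixed - fixed.mean(0), moving - moving.mean(0)
u, _, vt = np.linalg.svd(mc.T @ fc)
d = np.sign(np.linalg.det(u @ vt))
r_est = u @ np.diag([1.0, 1.0, d]) @ vt
aligned = mc @ r_est + fixed.mean(0)

# Symmetric Hausdorff distance between the aligned surfaces.
dist = cdist(fixed, aligned)
hausdorff = max(dist.min(axis=0).max(), dist.min(axis=1).max())
```

With noiseless correspondences the recovered transform is exact, so the Hausdorff distance is numerically zero; real surfaces lack correspondences, which is why a GMM-based registration is needed.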
Anatomic attention regions via optimal anatomy modeling and recognition for DL-based image segmentation.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-02 DOI: 10.1117/12.3006771
Yadavendra Nln, J K Udupa, D Odhner, T Liu, Y Tong, D A Torigian
Organ segmentation is a crucial task in many medical imaging applications. Many deep learning models have been developed for it, but they are slow and require substantial computational resources. Attention mechanisms address this by locating the important objects of interest within medical images, allowing a model to segment them accurately even in the presence of noise or artifacts. By attending to specific anatomical regions, the model segments more effectively. Medical images carry unique anatomical information that distinguishes them from natural images; unfortunately, most deep learning methods either ignore this information or do not use it effectively and explicitly. Combining natural intelligence with artificial intelligence, known as hybrid intelligence, has shown promising results in medical image segmentation, making models more robust and able to perform well in challenging situations. In this paper, we propose several methods and models that find attention regions in medical images for deep-learning-based segmentation via non-deep-learning methods. We developed and trained these models using hybrid-intelligence concepts. To evaluate their performance, we tested the models on unseen test data and analyzed metrics including the false-negative and false-positive quotients. Our findings demonstrate that object shape and layout variations can be explicitly learned to create computational models suited to each anatomic object. This work opens new possibilities for advances in medical image segmentation and analysis.

Volume 12930. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11218901/pdf/
Citations: 0
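One simple non-deep-learning way to obtain an anatomic attention region, in the spirit of the abstract above, is to dilate a rough model/atlas mask of the object and crop its bounding box for the segmentation network to attend to. The mask and margin below are assumptions for illustration.

```python
# Hypothetical attention-region sketch: dilate a rough anatomic prior mask
# and take its bounding box as the ROI. Mask and margin are placeholders.
import numpy as np
from scipy.ndimage import binary_dilation, find_objects

# Rough prior mask of an organ inside a 64^3 volume (stand-in for a model
# produced by anatomy modeling and recognition).
prior = np.zeros((64, 64, 64), dtype=bool)
prior[20:35, 25:40, 30:45] = True

# Dilate to tolerate recognition error, then take the tight bounding box.
attention = binary_dilation(prior, iterations=3)
box = find_objects(attention.astype(np.uint8))[0]

roi_shape = tuple(s.stop - s.start for s in box)
```

A segmentation network would then be run only on the cropped `box`, cutting computation and suppressing distant false positives.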
Fully automatic mpMRI analysis using deep learning predicts peritumoral glioblastoma infiltration and subsequent recurrence.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-02 DOI: 10.1117/12.3001752
Sunwoo Kwak, Hamed Akbari, Jose A Garcia, Suyash Mohan, Christos Davatzikos
Glioblastoma (GBM) is the most aggressive and most common adult brain tumor. Standard treatment typically includes maximal surgical resection followed by adjuvant radiotherapy and chemotherapy. However, the efficacy of these treatments is often limited because the tumor infiltrates the surrounding brain tissue, often extending beyond the radiologically defined margins. This infiltration contributes to the high recurrence rate and poor prognosis of GBM patients, necessitating advanced methods for early and accurate detection of tumor infiltration. Although traditional supervised machine learning shows great promise in predicting tumor infiltration beyond resectable margins, these methods rely heavily on expert-drawn regions of interest (ROIs), which are used to construct multivariate models of the magnetic resonance (MR) signal characteristics associated with tumor infiltration. This process is both time consuming and resource intensive. Addressing this limitation, our study proposes a novel integration of fully automatic ROI generation with deep learning to create predictive maps of tumor infiltration. The approach uses pre-operative multi-parametric MRI (mpMRI) scans, encompassing T1, T1Gd, T2, T2-FLAIR, and ADC sequences, to fully leverage the knowledge from previously drawn ROIs. A patch-based convolutional neural network (CNN) is then trained on the automatically generated ROIs to predict areas of potential tumor infiltration. Model performance was evaluated with leave-one-out cross-validation, and the generated predictive maps were binarized for comparison against post-recurrence mpMRI scans. The model demonstrates robust predictive capability, with an average cross-validated accuracy of 0.87, specificity of 0.88, and sensitivity of 0.90. Notably, an odds ratio of 8.62 indicates that regions identified as high risk on the predictive map were significantly more likely to exhibit tumor recurrence than low-risk regions. The proposed method demonstrates that fully automatic mpMRI analysis using deep learning can successfully predict tumor infiltration in the peritumoral region of GBM patients while bypassing the intensive requirement for expert-drawn ROIs.

Volume 12926. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11089715/pdf/
Citations: 0
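The leave-one-out evaluation and the accuracy/sensitivity/specificity metrics quoted above can be sketched as follows, using a simple stand-in classifier on synthetic features rather than the paper's patch CNN.

```python
# Leave-one-out cross-validation sketch with accuracy, sensitivity, and
# specificity; a logistic-regression stand-in replaces the paper's patch CNN.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
# Synthetic features: 0 = non-infiltrated region, 1 = infiltrated region.
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(2, 1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)

preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = LogisticRegression().fit(X[train_idx], y[train_idx])
    preds[test_idx] = clf.predict(X[test_idx])

acc = (preds == y).mean()           # accuracy
sens = preds[y == 1].mean()         # sensitivity: true-positive rate
spec = 1 - preds[y == 0].mean()     # specificity: true-negative rate
```

Each sample is predicted by a model that never saw it, which is what makes the quoted cross-validated metrics honest estimates.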
ShapeAXI: Shape Analysis Explainability and Interpretability.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-02 DOI: 10.1117/12.3007053
Juan Carlos Prieto, Felicia Miranda, Marcela Gurgel, Luc Anchling, Nathan Hutin, Selene Barone, Najla Al Turkestani, Aron Aliaga, Marilia Yatabe, Jonas Bianchi, Lucia Cevidanes
ShapeAXI is a cutting-edge framework for shape analysis that leverages a multi-view approach, capturing 3D objects from diverse viewpoints and analyzing them with 2D convolutional neural networks (CNNs). We implement an automatic N-fold cross-validation process and aggregate results across all folds. This yields insightful explainability heat maps for each class across every shape, enhancing interpretability and contributing to a more nuanced understanding of the underlying phenomena. We demonstrate ShapeAXI's versatility through two targeted classification experiments. The first categorizes condyles into healthy and degenerative states. The second, more intricate experiment works with shapes extracted from CBCT scans of cleft patients, efficiently classifying them into four severity classes. This application not only aligns with existing medical research but also opens new avenues for specialized cleft-patient analysis, holding considerable promise for both scientific exploration and clinical practice. The rich insights derived from ShapeAXI's explainability images reinforce existing knowledge and provide a platform for fresh discovery in condyle assessment and cleft severity classification. As a versatile and interpretative tool, ShapeAXI sets a new benchmark in 3D object interpretation and classification, and its approach promises contributions to research and practical applications across many domains. ShapeAXI is available in our GitHub repository: https://github.com/DCBIA-OrthoLab/ShapeAXI.

Volume 12931. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11085013/pdf/
Citations: 0
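The multi-view idea above — rendering a 3D object from several viewpoints so that 2D CNNs can consume it — can be sketched minimally as orthographic depth maps of a point set. The viewpoint count, resolution, and normalization are assumptions, not ShapeAXI's actual renderer.

```python
# Minimal multi-view sketch: project a 3D point set into 2D depth maps from
# several viewpoints. Views, resolution, and scaling are placeholders.
import numpy as np

def depth_maps(points, n_views=4, res=32):
    maps = []
    for k in range(n_views):
        theta = 2 * np.pi * k / n_views
        # Rotate around the z-axis, then orthographically project onto x-y.
        rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                        [np.sin(theta),  np.cos(theta), 0],
                        [0, 0, 1]])
        p = points @ rot.T
        img = np.full((res, res), -np.inf)
        # Map x, y coordinates into pixel indices.
        ij = ((p[:, :2] + 1.5) / 3.0 * (res - 1)).astype(int).clip(0, res - 1)
        for (i, j), z in zip(ij, p[:, 2]):
            img[j, i] = max(img[j, i], z)  # keep the nearest-surface depth
        maps.append(img)
    return np.stack(maps)

pts = np.random.default_rng(2).normal(size=(500, 3)).clip(-1.4, 1.4)
views = depth_maps(pts)
```

Each of the `n_views` images can then be fed to an ordinary 2D CNN, and per-view predictions aggregated.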
Can patient-specific acquisition protocol improve performance on defect detection task in myocardial perfusion SPECT?
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-03-29 DOI: 10.1117/12.3006924
Nu Ri Choi, Md Ashequr Rahman, Zitong Yu, Barry A Siegel, Abhinav K Jha
Myocardial perfusion imaging using single-photon emission computed tomography (SPECT), or myocardial perfusion SPECT (MPS), is a widely used clinical imaging modality for the diagnosis of coronary artery disease. Current clinical protocols for acquiring and reconstructing MPS images are similar for most patients. However, for patients with outlier anatomical characteristics, such as large breasts, images acquired with conventional protocols are often sub-optimal in quality, degrading diagnostic accuracy. Improving image quality for these patients without increasing dose or total acquisition time remains challenging. Thus, there is an important need for new methodologies that improve the quality of the acquired images for such patients in terms of the ability to detect myocardial perfusion defects. One approach is to adapt the image-acquisition protocol to each patient. Studies have shown that in MPS, different projection angles usually contain varying amounts of information for the detection task, yet current clinical protocols spend the same time at each projection angle. In this work, we evaluated whether an acquisition protocol optimized for each patient could improve defect-detection performance in reconstructed images for patients with outlier anatomical characteristics. We first designed and implemented a personalized patient-specific protocol-optimization strategy, which we term precision SPECT (PRESPECT). This strategy integrates ideal-observer theory with the constraints of tomographic reconstruction to optimize the acquisition time for each projection view such that performance on the task of detecting myocardial perfusion defects is maximized. We performed a clinically realistic simulation study of patients with outlier anatomies, detecting perfusion defects in multiple realizations of low-dose scans with an anthropomorphic channelized Hotelling observer. Our results show that PRESPECT improved performance on the defect-detection task for the considered patients. These results provide evidence that personalizing the MPS acquisition protocol can improve defect-detection performance in reconstructed images, as assessed by anthropomorphic observers, for patients with outlier anatomical characteristics. Our findings thus motivate further research into designing optimal patient-specific acquisition and reconstruction protocols for MPS, as well as developing similar approaches for other medical imaging modalities.

Volume 12929. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11566828/pdf/
Citations: 0
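A channelized Hotelling observer of the kind used for the detection-task evaluation above can be sketched as follows. The channels here are random placeholders (a real anthropomorphic observer uses perceptually motivated channels), and the data are synthetic; the Hotelling template S⁻¹(μ_s − μ_n) and the Wilcoxon AUC estimate are the standard ingredients.

```python
# Simplified channelized Hotelling observer; random channels stand in for
# anthropomorphic ones, and images are synthetic 1D stand-ins.
import numpy as np

rng = np.random.default_rng(0)
npix, nchan, ntrain = 256, 8, 200

channels = rng.normal(size=(npix, nchan)) / np.sqrt(npix)
signal = np.zeros(npix)
signal[100:110] = 4.0  # known defect profile

# Training images: noise-only and signal-present classes.
noise_imgs = rng.normal(size=(ntrain, npix))
sig_imgs = rng.normal(size=(ntrain, npix)) + signal

# Reduce each image to channel outputs.
vn = noise_imgs @ channels
vs = sig_imgs @ channels

# Hotelling template in channel space: S^-1 (mean_s - mean_n).
s_cov = 0.5 * (np.cov(vn.T) + np.cov(vs.T))
w = np.linalg.solve(s_cov, vs.mean(0) - vn.mean(0))

# Test statistics and a Wilcoxon (rank-based) AUC estimate.
tn, ts = vn @ w, vs @ w
auc = (ts[:, None] > tn[None, :]).mean()
```

Task performance (AUC) computed this way is what a protocol optimizer like PRESPECT would maximize over projection-time allocations.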
Speech Motion Anomaly Detection via Cross-Modal Translation of 4D Motion Fields from Tagged MRI.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-05-01 DOI: 10.1117/12.3006874
Xiaofeng Liu, Fangxu Xing, Jiachen Zhuo, Maureen Stone, Jerry L Prince, Georges El Fakhri, Jonghye Woo
Understanding the relationship between tongue motion patterns during speech and their resulting acoustic outcomes (i.e., the articulatory-acoustic relation) is of great importance in assessing speech quality and developing innovative treatment and rehabilitative strategies, especially when evaluating and detecting abnormal articulatory features in patients with speech-related disorders. In this work, we develop a framework for detecting speech motion anomalies in conjunction with their corresponding speech acoustics. This is achieved with a deep cross-modal translator, trained on data from healthy individuals only, that bridges the gap between 4D motion fields obtained from tagged MRI and 2D spectrograms derived from speech acoustic data. The trained translator is used as an anomaly detector by measuring spectrogram reconstruction quality: it is likely to generalize poorly to patient data, which contains unseen out-of-distribution patterns, and thus reconstructs patient spectrograms less faithfully than those of healthy individuals. A one-class SVM then distinguishes the spectrograms of healthy individuals from those of patients. To validate the framework, we collected 39 paired tagged-MRI and speech waveforms, comprising 36 healthy individuals and 3 tongue cancer patients. We used both 3D convolutional and transformer-based deep translation models, training them on the healthy training set and applying them to both the healthy and patient test sets. The framework demonstrates the capability to detect abnormal patient data, illustrating its potential to enhance understanding of the articulatory-acoustic relation for both healthy individuals and patients.

Volume 12926. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11377028/pdf/
Citations: 0
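The final detection step described above — a one-class SVM fit on healthy subjects only, flagging patients as outliers — can be sketched on synthetic reconstruction-error scores. The error values below are placeholders for actual spectrogram reconstruction quality.

```python
# One-class SVM anomaly-detection sketch; reconstruction-error values are
# synthetic placeholders for spectrogram reconstruction quality scores.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Healthy subjects reconstruct well (low error); patients reconstruct poorly.
healthy_err = rng.normal(0.1, 0.02, (36, 1))
patient_err = rng.normal(0.5, 0.05, (3, 1))

# Fit on healthy data only; nu bounds the fraction of healthy outliers.
svm = OneClassSVM(nu=0.1, gamma="scale").fit(healthy_err)
pred_healthy = svm.predict(healthy_err)  # +1 = inlier
pred_patient = svm.predict(patient_err)  # -1 = outlier expected
```

Because the SVM never sees patient data, it mirrors the paper's setup of training exclusively on healthy individuals.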
Demonstration of 1000 fps High-Speed Angiography (HSA) in Pre-Clinical In-vivo Rabbit Aneurysm Models During Flow-Diverter Treatment.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-02 DOI: 10.1117/12.3005678
E A Vanderbilt, C Koenighsknecht, D Pionessa, C N Ionita, D R Bednarek, S Rudin, S V Setlur Nagesh
High-Speed Angiography (HSA) at 1000 fps is a novel interventional imaging technique previously used to visualize changes in vascular flow detail before and after flow-diverter treatment of cerebral aneurysms in in-vitro 3D-printed models [1]. In this first pre-clinical work, we demonstrate the HSA technique during flow-diverter treatment of in-vivo rabbit aneurysm models. An aneurysm was created in the right common carotid artery of each of two rabbits using previously published elastase aneurysm-creation methods [2]. A 5-French catheter was inserted into the femoral artery and advanced to the aneurysm location under the guidance of standard-speed 10 fps flat-panel-detector (FPD) fluoroscopy. A flow-diverter stent was then placed in the parent vessel, covering the aneurysm neck and diverting flow away from the aneurysm. HSA was performed before and after placement of the flow diverter using a 1000 fps CdTe photon-counting detector (Aries, Varex), mounted on a motorized changer and used with a commercial x-ray c-arm system (Fig. 1). During these procedures, Omnipaque iodinated contrast was injected into the aneurysm area with a computer-controlled injector at a steady rate of 50 ml/min or 70 ml/min, depending on the rabbit, to visualize blood-flow detail. Contrast injection and x-ray image acquisition were synchronized manually. Images were acquired for 1 second, of which 300 ms was used for velocity analysis during systole. Detailed differences in flow patterns in the region of interest (ROI) between pre- and post-flow-diverter deployment were visualized at the high frame rates. The optical-flow (OF) method for velocity calculation was applied to the acquired 1000 fps HSA image sequences to provide quantitative evaluation of flow.

Volume 12930. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11559516/pdf/
Citations: 0
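The optical-flow velocity analysis mentioned above can be illustrated in its simplest form: the brightness-constancy equation Ix·u + It = 0 solved by least squares for a single global 1D shift between two frames. Real HSA analysis computes dense 2D velocity fields; the Gaussian "bolus" profile here is an assumption.

```python
# 1D optical-flow sketch: recover a global sub-pixel shift between two frames
# from the brightness-constancy equation. The bolus profile is synthetic.
import numpy as np

x = np.arange(256, dtype=float)
frame = lambda s: np.exp(-((x - 128 - s) ** 2) / (2 * 8.0 ** 2))

i0, i1 = frame(0.0), frame(0.5)  # contrast bolus moves 0.5 px between frames

# Brightness constancy: Ix * u + It = 0, solved in least squares for u.
ix = np.gradient(i0)
it = i1 - i0
u = -np.sum(ix * it) / np.sum(ix * ix)

# At 1000 fps, a displacement of u px/frame corresponds to u * 1000 px/s.
velocity_px_per_s = u * 1000.0
```

The 1000 fps frame rate is precisely what keeps inter-frame displacements small enough for this linearized equation to hold in fast arterial flow.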
Investigating Causal Genetic Effects on Overall Survival of Glioblastoma Patients using Normalizing Flow and Structural Causal Model.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-03 DOI: 10.1117/12.3005434
Fanyang Yu, Rongguang Wang, Pratik Chaudhari, Christos Davatzikos
Glioblastoma (GBM) is the most common and aggressive brain tumor, with a short overall survival (OS) of about 15 months. Understanding the causal factors affecting patient survival is crucial for disease prognosis and treatment planning. Although previous efforts on survival prediction using multi-omics data have yielded useful predictive models, the causation behind the correlated genetic risk factors has not been addressed. Recent advances in causal deep learning enable the study of causality in complex datasets. In this paper, we leverage the recently proposed structural causal model (SCM), with normalizing flows parameterized by deep networks, to perform counterfactual queries investigating the causal relationship between gene mutation and OS in the presence of confounders including sex, age, and radiomic features. The query asks what the survival time would be if the gene-mutation status were changed, i.e., from mutant to non-mutant and vice versa. The trained causal model infers the counterfactual outcome given an intervention on a specific gene mutation. We apply a multivariate Cox proportional-hazards model to find genes associated with survival, and investigate causal genetic effects by comparing original and counterfactual survival times in a bi-directional fashion. Specifically, two scenarios are considered: (1) intervening on a specific non-mutant gene to generate the counterfactual survival time as if the gene were mutant, for comparison with the original survival times of subjects carrying the mutant gene; and (2) intervening on the mutant gene and comparing with the survival times of subjects carrying the non-mutant gene. Our experimental results show that no causation was revealed for two correlated genes (NF1, RB1) in the cohort (n=181), while their genetic effects on OS, in terms of prolonging or shortening survival, generally accord with clinical findings.

Volume 12927. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11034818/pdf/
Citations: 0
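The counterfactual query above follows the standard abduction-action-prediction recipe, which a toy linear SCM makes concrete: recover each subject's exogenous noise, intervene on the mutation variable, and recompute survival with the same noise. The linear mechanism and numbers below are illustrative assumptions, not the paper's flow-based model.

```python
# Toy SCM counterfactual (abduction-action-prediction); the linear mechanism
# and its coefficients are placeholders for the paper's normalizing flow.
import numpy as np

rng = np.random.default_rng(0)

# Mechanism: survival = base + effect * mutation + noise.
base, effect = 400.0, -120.0
mutation = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])
noise = rng.normal(0, 30, size=10)
survival = base + effect * mutation + noise

# Abduction: recover each subject's exogenous noise from observed data.
u = survival - (base + effect * mutation)

# Action: intervene on the mutation status (flip it).
mutation_cf = 1 - mutation

# Prediction: recompute survival under the intervention, keeping the noise.
survival_cf = base + effect * mutation_cf + u

delta = survival_cf - survival  # per-subject counterfactual change in OS
```

In the paper, the invertible normalizing flow plays the role of this closed-form abduction step for non-linear mechanisms with confounders.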
GAN-Based Motion Artifact Correction of 3D MR Volumes Using an Image-to-Image Translation Algorithm.
Proceedings of SPIE--the International Society for Optical Engineering Pub Date : 2024-02-01 Epub Date: 2024-04-02 DOI: 10.1117/12.3007743
Vishnu Vardhan Reddy Kanamata Reddy, Chandan Ganesh Bangalore Yogananda, Nghi C D Truong, Ananth J Madhuranthakam, Joseph A Maldjian, Baowei Fei
The quality of brain MRI volumes is often compromised by motion artifacts arising from intricate respiratory patterns and involuntary head movements, which manifest as blurring and ghosting that markedly degrade imaging quality. In this study, we introduce a 3D deep learning framework to restore brain MR volumes afflicted by motion artifacts. The framework integrates a densely connected 3D U-net augmented by generative adversarial network (GAN)-informed training, with a novel volumetric reconstruction loss tailored to the 3D GAN to enhance volume quality. Our methodology is substantiated through comprehensive experiments on a diverse set of motion-artifact-affected MR volumes. After motion correction, the generated high-quality MR volumes have volumetric signatures comparable to motion-free MR volumes. This underscores the potential of this 3D deep learning system to aid in correcting motion artifacts in brain MR volumes, highlighting a promising avenue for advanced clinical applications.

Volume 12930. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11262355/pdf/
Citations: 0
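A GAN-informed training objective with a volumetric reconstruction term, of the general shape described above, can be sketched as an adversarial term plus a weighted voxel-wise L1 term. The non-saturating adversarial form and the weighting `lam` are assumptions, not the paper's exact loss.

```python
# Sketch of a generator objective: adversarial term + weighted volumetric L1
# reconstruction term. The exact form and lam are assumed, not the paper's.
import numpy as np

def generator_loss(disc_fake, restored, target, lam=100.0):
    # Non-saturating adversarial term: push D(restored volume) toward 1.
    adv = -np.mean(np.log(disc_fake + 1e-8))
    # Volumetric reconstruction term: voxel-wise L1 over the whole 3D volume.
    recon = np.mean(np.abs(restored - target))
    return adv + lam * recon
```

The reconstruction term anchors the generator to the motion-free ground truth, while the adversarial term discourages the over-smoothed outputs that a pure L1 loss tends to produce.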