Computerized Medical Imaging and Graphics: Latest Articles

Progress and trends in neurological disorders research based on deep learning
IF 5.7 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics, Volume 116, Article 102400 | Pub Date: 2024-05-25 | DOI: 10.1016/j.compmedimag.2024.102400
Authors: Muhammad Shahid Iqbal, Md Belal Bin Heyat, Saba Parveen, Mohd Ammar Bin Hayat, Mohamad Roshanzamir, Roohallah Alizadehsani, Faijan Akhtar, Eram Sayeed, Sadiq Hussain, Hany S. Hussein, Mohamad Sawan
Abstract: In recent years, deep learning (DL) has emerged as a powerful tool in clinical imaging, offering unprecedented opportunities for the diagnosis and treatment of neurological disorders (NDs). This comprehensive review explores the multifaceted role of DL techniques in leveraging vast datasets to advance our understanding of NDs and improve clinical outcomes. Beginning with a systematic literature review, we delve into the utilization of DL, particularly focusing on multimodal neuroimaging data analysis, a domain that has witnessed rapid progress and garnered significant scientific interest. Our study categorizes and critically analyses numerous DL models, including Convolutional Neural Networks (CNNs), LSTM-CNN, GAN, and VGG, to understand their performance across different types of neurological diseases. Through this analysis, we identify key benchmarks and datasets utilized in training and testing DL models, shedding light on the challenges and opportunities in clinical neuroimaging research. Moreover, we discuss the effectiveness of DL in real-world clinical scenarios, emphasizing its potential to revolutionize ND diagnosis and therapy. By synthesizing existing literature and describing future directions, this review not only provides insights into the current state of DL applications in ND analysis but also paves the way for the development of more efficient and accessible DL techniques. Finally, our findings underscore the transformative impact of DL in reshaping the landscape of clinical neuroimaging, offering hope for enhanced patient care and groundbreaking discoveries in the field of neurology. This review is beneficial for neuropathologists and new researchers in this field.
Citations: 0
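The review surveys CNN backbones such as VGG for neuroimaging classification. As a purely illustrative companion (not code from the paper), the following PyTorch sketch shows one common way such a backbone is adapted to a two-class neuroimaging task; the class count, 3-channel input, and random batch are placeholder assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical setup: adapt a pretrained VGG16 (one of the backbone families
# the review surveys) to a two-class neuroimaging task, e.g. disorder vs. control.
# The class count, 3-channel input, and random batch are placeholder assumptions.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on random stand-in data.
images = torch.randn(8, 3, 224, 224)   # e.g. MRI slices replicated to 3 channels
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```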
A deep learning approach for virtual contrast enhancement in Contrast Enhanced Spectral Mammography
IF 5.7 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics, Volume 116, Article 102398 | Pub Date: 2024-05-23 | DOI: 10.1016/j.compmedimag.2024.102398
Authors: Aurora Rofena, Valerio Guarrasi, Marina Sarli, Claudia Lucia Piccolo, Matteo Sammarra, Bruno Beomonte Zobel, Paolo Soda
Abstract: Contrast Enhanced Spectral Mammography (CESM) is a dual-energy mammographic imaging technique that first requires intravenously administering an iodinated contrast medium. It then collects both a low-energy image, comparable to standard mammography, and a high-energy image. The two scans are combined to produce a recombined image showing contrast enhancement. Despite the diagnostic advantages of CESM for breast cancer diagnosis, the contrast medium can cause side effects, and CESM also exposes patients to a higher radiation dose than standard mammography. To address these limitations, this work proposes using deep generative models for virtual contrast enhancement on CESM, aiming to make CESM contrast-free and to reduce the radiation dose. Our deep networks, consisting of an autoencoder and two Generative Adversarial Networks, Pix2Pix and CycleGAN, generate synthetic recombined images solely from low-energy images. We perform an extensive quantitative and qualitative analysis of the models' performance, also exploiting radiologists' assessments, on a novel CESM dataset that includes 1138 images. As a further contribution of this work, we make the dataset publicly available. The results show that CycleGAN is the most promising deep network for generating synthetic recombined images, highlighting the potential of artificial intelligence techniques for virtual contrast enhancement in this field.
Citations: 0
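To make the image-to-image translation idea concrete, here is a minimal Pix2Pix-style training step in PyTorch, sketched under the assumption of single-channel images in [0, 1]: a generator maps a low-energy image to a synthetic recombined image and a patch-wise discriminator judges the pair. The tiny architectures, loss weight, and random batch are placeholders, not the paper's networks or data.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder generator: low-energy image -> synthetic recombined image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Placeholder PatchGAN-like discriminator on (low-energy, recombined) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),  # patch-wise real/fake logits
        )

    def forward(self, low_energy, recombined):
        return self.net(torch.cat([low_energy, recombined], dim=1))

G, D = TinyGenerator(), TinyDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()

low, real = torch.rand(4, 1, 128, 128), torch.rand(4, 1, 128, 128)  # stand-in batch

# Discriminator step: real recombined pairs vs. generated pairs.
pred_real = D(low, real)
pred_fake = D(low, G(low).detach())
d_loss = adv_loss(pred_real, torch.ones_like(pred_real)) + \
         adv_loss(pred_fake, torch.zeros_like(pred_fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close (L1) to the real image.
fake = G(low)
pred_fake = D(low, fake)
g_loss = adv_loss(pred_fake, torch.ones_like(pred_fake)) + 100.0 * l1_loss(fake, real)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```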
Deep learning ensembles for detecting brain metastases in longitudinal multi-modal MRI studies
IF 5.7 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics, Volume 116, Article 102401 | Pub Date: 2024-05-22 | DOI: 10.1016/j.compmedimag.2024.102401
Authors: Bartosz Machura, Damian Kucharski, Oskar Bozek, Bartosz Eksner, Bartosz Kokoszka, Tomasz Pekala, Mateusz Radom, Marek Strzelczak, Lukasz Zarudzki, Benjamín Gutiérrez-Becker, Agata Krason, Jean Tessier, Jakub Nalepa
Abstract: Metastatic brain cancer is a condition characterized by the migration of cancer cells to the brain from extracranial sites. Notably, metastatic brain tumors surpass primary brain tumors in prevalence by a significant factor; they exhibit aggressive growth potential and can spread across diverse cerebral locations simultaneously. Magnetic resonance imaging (MRI) scans of individuals afflicted with metastatic brain tumors unveil a wide spectrum of characteristics. These lesions vary in size and quantity, spanning from tiny nodules to substantial masses captured within MRI. Patients may present with a limited number of lesions or an extensive burden of hundreds of them. Moreover, longitudinal studies may depict surgical resection cavities, as well as areas of necrosis or edema. Thus, the manual analysis of such MRI scans is difficult, user-dependent and cost-inefficient, and, importantly, it lacks reproducibility. We address these challenges and propose a pipeline for detecting and analyzing brain metastases in longitudinal studies, which benefits from an ensemble of various deep learning architectures originally designed for different downstream tasks (detection and segmentation). The experiments, performed over 275 multi-modal MRI scans of 87 patients acquired in 53 sites, coupled with rigorously validated manual annotations, revealed that our pipeline, built upon open-source tools to ensure its reproducibility, offers high-quality detection and allows for precisely tracking the disease progression. To objectively quantify the generalizability of models, we introduce a new data stratification approach that accommodates the heterogeneity of the dataset and is used to elaborate training-test splits in a data-robust manner, alongside a new set of quality metrics to objectively assess algorithms. Our system provides a fully automatic and quantitative approach that may support physicians in the laborious process of disease progression tracking and evaluation of treatment efficacy.
Citations: 0
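As an illustration of the ensembling idea only (the paper's actual fusion rule is not specified here), the sketch below averages per-model probability maps for one volume, thresholds them, and removes tiny connected components; the threshold, component-size cutoff, and random inputs are assumptions.

```python
import numpy as np
from scipy import ndimage

def ensemble_lesion_mask(prob_maps, threshold=0.5, min_voxels=5):
    """Fuse per-model probability maps for one MRI volume into a binary lesion
    mask by averaging and thresholding, then drop tiny connected components.
    Illustrative stand-in for an ensemble fusion rule, not the paper's pipeline."""
    mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
    mask = mean_prob >= threshold
    labeled, n_components = ndimage.label(mask)
    for comp in range(1, n_components + 1):
        component = labeled == comp
        if component.sum() < min_voxels:      # likely a false positive
            mask[component] = False
    return mask

# Usage with three hypothetical model outputs on a small stand-in volume.
maps = [np.random.rand(32, 64, 64) for _ in range(3)]
print(ensemble_lesion_mask(maps).sum(), "voxels flagged as metastasis")
```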
A 3D framework for segmentation of carotid artery vessel wall and identification of plaque compositions in multi-sequence MR images
IF 5.7 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics, Volume 116, Article 102402 | Pub Date: 2024-05-21 | DOI: 10.1016/j.compmedimag.2024.102402
Authors: Jian Wang, Fan Yu, Mengze Zhang, Jie Lu, Zhen Qian
Abstract: Accurately assessing carotid artery wall thickening and identifying risky plaque components are critical for early diagnosis and risk management of carotid atherosclerosis. In this paper, we present a 3D framework for automated segmentation of the carotid artery vessel wall and identification of the compositions of carotid plaque in multi-sequence magnetic resonance (MR) images under the challenge of imperfect manual labeling. Manual labeling is commonly done in 2D slices of these multi-sequence MR images and often lacks perfect alignment across 2D slices and the multiple MR sequences, leading to labeling inaccuracies. To address such challenges, our framework is split into two parts: a segmentation subnetwork and a plaque component identification subnetwork. Initially, a 2D localization network pinpoints the carotid artery's position, extracting the region of interest (ROI) from the input images. Following that, a signed-distance-map-enabled 3D U-Net (Çiçek et al., 2016), an adaptation of the nnU-Net (Ronneberger and Fischer, 2015), segments the carotid artery vessel wall. This method allows for the concurrent segmentation of the vessel wall area using the signed distance map (SDM) loss (Xue et al., 2020), which regularizes the segmentation surfaces in 3D and reduces erroneous segmentation caused by imperfect manual labels. Subsequently, the ROI of the input images and the obtained vessel wall masks are extracted and combined to obtain the identification results of plaque components in the identification subnetwork. Tailored data augmentation operations are introduced into the framework to reduce the false positive rate of calcification and hemorrhage identification. We trained and tested our proposed method on a dataset consisting of 115 patients, and it achieves an accurate segmentation result of the carotid artery wall (0.8459 Dice), which is superior to the best result in published studies (0.7885 Dice). Our approach yielded accuracies of 0.82, 0.73 and 0.88 for the identification of calcification, lipid-rich core and hemorrhage components. Our proposed framework can potentially be used in clinical and research settings to help radiologists perform cumbersome reading tasks and evaluate the risk of carotid plaques.
Citations: 0
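For readers unfamiliar with signed distance maps, here is a minimal sketch of one common SDM construction and a simple regression loss against it. It approximates the general SDM-loss idea rather than the exact formulation of Xue et al. (2020) or this paper; the toy mask, sign convention, and L1 term are assumptions.

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """Signed distance to the mask boundary: negative inside the structure,
    positive outside. Sign convention and normalization vary across papers;
    this is only one common choice."""
    mask = mask.astype(bool)
    if not mask.any() or mask.all():
        return np.zeros(mask.shape, dtype=np.float32)
    dist_outside = distance_transform_edt(~mask)  # distance to the structure, outside it
    dist_inside = distance_transform_edt(mask)    # distance to the background, inside it
    return (dist_outside - dist_inside).astype(np.float32)

# Illustrative regression of a predicted SDM against the ground-truth SDM of a
# toy 3D mask; a real pipeline would add Dice/cross-entropy segmentation terms.
gt_mask = np.zeros((16, 32, 32), dtype=np.uint8)
gt_mask[4:12, 8:24, 8:24] = 1
gt_sdm = torch.from_numpy(signed_distance_map(gt_mask))

pred_sdm = torch.randn(16, 32, 32, requires_grad=True)  # stand-in for a 3D U-Net SDM head
sdm_loss = F.l1_loss(pred_sdm, gt_sdm)
sdm_loss.backward()
print(float(sdm_loss))
```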
Enhancing cancer prediction in challenging screen-detected incident lung nodules using time-series deep learning
IF 5.7 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics, Volume 116, Article 102399 | Pub Date: 2024-05-20 | DOI: 10.1016/j.compmedimag.2024.102399
Authors: Shahab Aslani, Pavan Alluri, Eyjolfur Gudmundsson, Edward Chandy, John McCabe, Anand Devaraj, Carolyn Horst, Sam M. Janes, Rahul Chakkara, Daniel C. Alexander, SUMMIT consortium, Arjun Nair, Joseph Jacob
Abstract: Lung cancer screening (LCS) using annual computed tomography (CT) scanning significantly reduces mortality by detecting cancerous lung nodules at an earlier stage. Deep learning algorithms can improve nodule malignancy risk stratification. However, they have typically been used to analyse single time point CT data when detecting malignant nodules on either baseline or incident CT LCS rounds. Deep learning algorithms offer the greatest value in two respects: they have great potential in assessing nodule change across time-series CT scans, where subtle changes may be challenging to identify using the human eye alone, and they can be targeted to detect nodules developing on incident screening rounds, where cancers are generally smaller and more challenging to detect confidently. Here, we show the performance of our deep learning-based Computer-Aided Diagnosis model integrating Nodule and Lung imaging data with clinical Metadata Longitudinally (DeepCAD-NLM-L) for malignancy prediction. DeepCAD-NLM-L showed improved performance (AUC = 88%) against models utilizing single time-point data alone. DeepCAD-NLM-L also demonstrated comparable and complementary performance to radiologists when interpreting the most challenging nodules typically found in LCS programs, and similar performance to radiologists when assessed on an out-of-distribution imaging dataset. The results emphasize the advantages of using time-series and multimodal analyses when interpreting malignancy risk in LCS.
Citations: 0
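The sketch below illustrates one generic way to fuse baseline and incident scan features with clinical metadata for malignancy prediction. It is loosely in the spirit of the longitudinal multimodal setup described, but the shared 3D encoder, layer sizes, metadata dimension, and random inputs are placeholder assumptions, not the DeepCAD-NLM-L architecture.

```python
import torch
import torch.nn as nn

class LongitudinalNoduleClassifier(nn.Module):
    """Toy fusion model: a shared 3D CNN encodes the nodule patch from two
    time points; features are concatenated with clinical metadata and passed
    to an MLP that predicts malignancy probability. All sizes are placeholders."""
    def __init__(self, n_clinical=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 * 2 + n_clinical, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, baseline_patch, incident_patch, clinical):
        f0 = self.encoder(baseline_patch)   # features from the baseline round
        f1 = self.encoder(incident_patch)   # features from the incident round
        return torch.sigmoid(self.head(torch.cat([f0, f1, clinical], dim=1)))

# Usage on random stand-in patches and metadata.
model = LongitudinalNoduleClassifier()
prob = model(torch.randn(2, 1, 32, 32, 32), torch.randn(2, 1, 32, 32, 32), torch.randn(2, 8))
print(prob.shape)  # (2, 1) malignancy probabilities
```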
Deep neural network for the prediction of KRAS, NRAS, and BRAF genotypes in left-sided colorectal cancer based on histopathologic images
IF 5.7 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics, Volume 115, Article 102384 | Pub Date: 2024-05-12 | DOI: 10.1016/j.compmedimag.2024.102384
Authors: Xuejie Li, Xianda Chi, Pinjie Huang, Qiong Liang, Jianpei Liu
Abstract:
Background: The KRAS, NRAS, and BRAF genotypes are critical for selecting targeted therapies for patients with metastatic colorectal cancer (mCRC). Here, we aimed to develop a deep learning model that utilizes pathologic whole-slide images (WSIs) to accurately predict the status of KRAS, NRAS, and BRAF V600E.
Methods: 129 patients with left-sided colon cancer and rectal cancer from the Third Affiliated Hospital of Sun Yat-sen University were assigned to the training and testing cohorts. Utilizing three convolutional neural networks (ResNet18, ResNet50, and Inception v3), we extracted 206 pathological features from H&E-stained WSIs, serving as the foundation for constructing specific pathological models. A clinical feature model was then developed, with carcinoembryonic antigen (CEA) identified through comprehensive multiple regression analysis as the key biomarker. Subsequently, these two models were combined to create a clinical-pathological integrated model, resulting in a total of three genetic prediction models.
Results: 103 patients were evaluated in the training cohort (1,782,302 image tiles), while the remaining 26 patients were enrolled in the testing cohort (489,481 image tiles). Compared with the clinical model and the pathology model, the combined model, which incorporated CEA levels and pathological signatures, showed increased predictive ability, with an area under the curve (AUC) of 0.96 in the training cohort and an AUC of 0.83 in the testing cohort, accompanied by a high positive predictive value (PPV 0.92).
Conclusion: The combined model demonstrated a considerable ability to accurately predict the status of KRAS, NRAS, and BRAF V600E in patients with left-sided colorectal cancer, with potential application in assisting doctors to develop targeted treatment strategies for mCRC patients and in effectively identifying mutations, eliminating the need for confirmatory genetic testing.
Citations: 0
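As a loose analogue of combining a pathology signature with the CEA biomarker (not the paper's actual pipeline or data), the sketch below fuses a simulated slide-level deep-feature score with simulated CEA values in a logistic regression and evaluates it on a held-out cohort; all values and the 103/26 split sizes are used only to mirror the cohort structure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical fusion of a slide-level pathology score (e.g., tile-level CNN
# features aggregated to one score per patient) with the CEA biomarker.
# Everything below is simulated; it is not the paper's data or feature set.
rng = np.random.default_rng(0)
n = 129
pathology_score = rng.normal(size=n)               # aggregated deep-feature signature
cea = rng.lognormal(mean=1.0, sigma=0.5, size=n)   # carcinoembryonic antigen level
mutant = (pathology_score + 0.3 * np.log(cea) + rng.normal(scale=0.8, size=n) > 0).astype(int)

X = np.column_stack([pathology_score, cea])
clf = LogisticRegression().fit(X[:103], mutant[:103])                 # training cohort
auc = roc_auc_score(mutant[103:], clf.predict_proba(X[103:])[:, 1])   # testing cohort
print(f"test AUC on simulated data: {auc:.2f}")
```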
Unsupervised lung CT image registration via stochastic decomposition of deformation fields
IF 5.7 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics, Volume 115, Article 102397 | Pub Date: 2024-05-07 | DOI: 10.1016/j.compmedimag.2024.102397
Authors: Jing Zou, Youyi Song, Lihao Liu, Angelica I. Aviles-Rivero, Jing Qin
Abstract: We address the problem of lung CT image registration, which underpins various diagnoses and treatments for lung diseases. The main crux of the problem is the large deformation that the lungs undergo during respiration. This physiological process imposes several challenges from a learning point of view. In this paper, we propose a novel training scheme, called stochastic decomposition, which enables deep networks to effectively learn such a difficult deformation field during lung CT image registration. The key idea is to stochastically decompose the deformation field and supervise the registration with synthetic data that have the corresponding appearance discrepancy. The stochastic decomposition allows for revealing all possible decompositions of the deformation field. At the learning level, these decompositions can be seen as a prior that reduces the ill-posedness of the registration, thereby boosting performance. We demonstrate the effectiveness of our framework on lung CT data. We show, through extensive numerical and visual results, that our technique outperforms existing methods.
Citations: 0
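One possible reading of "stochastically decompose the deformation field and supervise with synthetic data" is to split a full displacement field into a random fraction and its remainder and warp the moving image by the partial field to create an intermediate synthetic target. The sketch below shows that reading in 2D with PyTorch's grid_sample; the scalar decomposition factor, the displacement convention, and the random data are assumptions, not the authors' construction.

```python
import torch
import torch.nn.functional as F

def warp(image, disp):
    """Warp a 2D image (N,1,H,W) by a displacement field disp (N,2,H,W),
    given in normalized [-1, 1] coordinates (channel 0 = x, channel 1 = y),
    using bilinear sampling."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    identity = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)  # (N,H,W,2)
    grid = identity + disp.permute(0, 2, 3, 1)
    return F.grid_sample(image, grid, align_corners=True)

# Illustrative stochastic decomposition: split a full deformation phi into a
# random fraction t*phi and its remainder (1-t)*phi, and use the partially
# warped image as an extra synthetic supervision target.
moving = torch.rand(1, 1, 64, 64)
phi = 0.05 * torch.randn(1, 2, 64, 64)       # full (normalized) displacement field
t = torch.rand(1).item()                     # random decomposition factor in (0, 1)
intermediate_target = warp(moving, t * phi)  # synthetic target for the partial field
fully_warped = warp(moving, phi)
print(intermediate_target.shape, fully_warped.shape)
```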
Weakly-supervised preclinical tumor localization associated with survival prediction from lung cancer screening Chest X-ray images
IF 5.7 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics, Volume 115, Article 102395 | Pub Date: 2024-05-07 | DOI: 10.1016/j.compmedimag.2024.102395
Authors: Renato Hermoza, Jacinto C. Nascimento, Gustavo Carneiro
Abstract: In this paper, we hypothesize that it is possible to localize image regions of preclinical tumors in a Chest X-ray (CXR) image by weakly-supervised training of a survival prediction model using a dataset containing CXR images of healthy patients and their time-to-death label. These visual explanations can empower clinicians in early lung cancer detection and increase patient awareness of their susceptibility to the disease. To test this hypothesis, we train a censor-aware multi-class survival prediction deep learning classifier that is robust to imbalanced training, where classes represent a quantized number of days for time-to-death prediction. Such a multi-class model allows us to use post-hoc interpretability methods, such as Grad-CAM, to localize image regions of preclinical tumors. For the experiments, we propose a new benchmark based on the National Lung Cancer Screening Trial (NLST) dataset to test weakly-supervised preclinical tumor localization and survival prediction models, and the results suggest that our proposed method achieves state-of-the-art C-index survival prediction and weakly-supervised preclinical tumor localization results. To our knowledge, this constitutes a pioneering approach in the field that is able to produce visual explanations of preclinical events associated with survival prediction results.
Citations: 0
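Since the localization relies on post-hoc interpretability, here is a generic Grad-CAM sketch showing how a class-specific heatmap can be pulled from the last convolutional block of a classifier. The stock ResNet18 backbone, random input, and target-class choice are placeholders; in the paper's setting the classes would correspond to quantized time-to-death bins, and the exact interpretability setup may differ.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Generic Grad-CAM: localize the image regions that drive the predicted class.
model = models.resnet18(weights=None)   # placeholder backbone with random weights
model.eval()

activations, gradients = {}, {}
def fwd_hook(_, __, output): activations["feat"] = output
def bwd_hook(_, grad_in, grad_out): gradients["feat"] = grad_out[0]
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)          # stand-in for a chest X-ray
logits = model(x)
target_class = int(logits.argmax(dim=1)) # e.g. the predicted time-to-death bin
logits[0, target_class].backward()

# Weight each feature map by the mean of its gradients, then ReLU and upsample.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalized heatmap
print(cam.shape)  # (1, 1, 224, 224)
```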
GNN-based structural information to improve DNN-based basal ganglia segmentation in children following early brain lesion
IF 5.7 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics, Volume 115, Article 102396 | Pub Date: 2024-05-07 | DOI: 10.1016/j.compmedimag.2024.102396
Authors: Patty Coupeau, Jean-Baptiste Fasquel, Lucie Hertz-Pannier, Mickaël Dinomais
Abstract: Analyzing the basal ganglia following an early brain lesion is crucial due to their noteworthy role in sensory-motor functions. However, the segmentation of these subcortical structures on MRI is challenging in children and is further complicated by the presence of a lesion. Although current deep neural networks (DNN) perform well in segmenting subcortical brain structures in healthy brains, they lack robustness when faced with lesion variability, leading to structural inconsistencies. Given the established spatial organization of the basal ganglia, we propose enhancing the DNN-based segmentation through post-processing with a graph neural network (GNN). The GNN conducts node classification on graphs encoding both class probabilities and spatial information regarding the regions segmented by the DNN. In this study, we focus on neonatal arterial ischemic stroke (NAIS) in children. The approach is evaluated on both healthy children and children after NAIS using three DNN backbones: U-Net, UNETr, and MSGSE-Net. The results show an improvement in segmentation performance, with an increase in the median Dice score by up to 4% and a reduction in the median Hausdorff distance (HD) by up to 93% for healthy children (from 36.45 to 2.57) and up to 91% for children suffering from NAIS (from 40.64 to 3.50). The performance of the method is compared with atlas-based methods. Severe cases of neonatal stroke result in a decline in performance in the injured hemisphere, without negatively affecting the segmentation of the contra-injured hemisphere. Furthermore, the approach demonstrates resilience to small training datasets, a widespread challenge in the medical field, particularly in pediatrics and for rare pathologies.
Citations: 0
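To make "node classification on graphs encoding class probabilities and spatial information" concrete, the sketch below runs a minimal two-layer graph convolution over region nodes whose features are a class-probability vector plus centroid coordinates. The adjacency construction, feature sizes, and two-layer design are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCNLayer(nn.Module):
    """One graph-convolution layer: H' = D^-1/2 (A+I) D^-1/2 H W
    (nonlinearity applied outside the layer)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        a_hat = adj + torch.eye(adj.size(0))            # add self-loops
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        norm_adj = deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)
        return self.lin(norm_adj @ x)

# Toy graph: each node is a region from the DNN segmentation, described by its
# class-probability vector plus centroid coordinates; edges link spatially
# adjacent regions. Node/class counts and the random graph are placeholders.
num_nodes, num_classes = 10, 8
probs = torch.softmax(torch.randn(num_nodes, num_classes), dim=1)
centroids = torch.rand(num_nodes, 3)
features = torch.cat([probs, centroids], dim=1)
adjacency = (torch.rand(num_nodes, num_nodes) > 0.7).float()
adjacency = ((adjacency + adjacency.T) > 0).float()     # make the adjacency symmetric

layer1 = SimpleGCNLayer(num_classes + 3, 16)
layer2 = SimpleGCNLayer(16, num_classes)
node_logits = layer2(F.relu(layer1(features, adjacency)), adjacency)
print(node_logits.shape)   # (10, 8): a refined class prediction per region
```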
Leveraging a realistic synthetic database to learn Shape-from-Shading for estimating the colon depth in colonoscopy images
IF 5.7 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics, Volume 115, Article 102390 | Pub Date: 2024-05-03 | DOI: 10.1016/j.compmedimag.2024.102390
Authors: Josué Ruano, Martín Gómez, Eduardo Romero, Antoine Manzanera
Abstract: Colonoscopy is the procedure of choice to diagnose, screen for, and treat colon and rectum cancer, from early detection of small precancerous lesions (polyps) to confirmation of malign masses. However, the high variability of the organ appearance and the complex shape of both the colon wall and structures of interest make this exploration difficult. Learned visuospatial and perceptual abilities mitigate technical limitations in clinical practice by proper estimation of the intestinal depth. This work introduces a novel methodology to estimate colon depth maps in single frames from monocular colonoscopy videos. The generated depth map is inferred from the shading variation of the colon wall with respect to the light source, as learned from a realistic synthetic database. Briefly, a classic convolutional neural network architecture is trained from scratch to estimate the depth map, improving sharp depth estimations in haustral folds and polyps by a custom loss function that minimizes the estimation error in edges and curvatures. The network was trained on a custom synthetic colonoscopy database herein constructed and released, composed of 248,400 frames (47 videos) with pixel-level depth annotations. This collection comprises 5 subsets of videos with progressively higher levels of visual complexity. Evaluation of the depth estimation with the synthetic database reached a threshold accuracy of 95.65% and a mean RMSE of 0.451 cm, while a qualitative assessment with a real database showed consistent depth estimations, visually evaluated by the expert gastroenterologist coauthoring this paper. Finally, the method achieved competitive performance with respect to another state-of-the-art method using a public synthetic database, and comparable results against five other state-of-the-art methods on a set of images. Additionally, three-dimensional reconstructions demonstrated useful approximations of the gastrointestinal tract geometry. Code for reproducing the reported results and the dataset are available at https://github.com/Cimalab-unal/ColonDepthEstimation.
Citations: 0
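In the spirit of "a custom loss function that minimizes the estimation error in edges and curvatures", the sketch below adds a gradient-matching term to a plain L1 depth loss so that errors around folds and edges are emphasized. The finite-difference gradient operator, weighting, and random inputs are assumptions; the paper's actual loss may differ.

```python
import torch
import torch.nn.functional as F

def image_gradients(depth):
    """Finite-difference gradients of a depth map (N,1,H,W) along x and y."""
    dx = depth[:, :, :, 1:] - depth[:, :, :, :-1]
    dy = depth[:, :, 1:, :] - depth[:, :, :-1, :]
    return dx, dy

def edge_aware_depth_loss(pred, target, grad_weight=1.0):
    """L1 depth loss plus an L1 penalty on gradient differences, so mistakes on
    edges and folds (where gradients are large) cost more. Illustrative only."""
    l1 = F.l1_loss(pred, target)
    pdx, pdy = image_gradients(pred)
    tdx, tdy = image_gradients(target)
    grad_term = F.l1_loss(pdx, tdx) + F.l1_loss(pdy, tdy)
    return l1 + grad_weight * grad_term

# Usage on a stand-in batch of predicted vs. ground-truth depth maps.
pred = torch.rand(2, 1, 128, 128, requires_grad=True)
target = torch.rand(2, 1, 128, 128)
loss = edge_aware_depth_loss(pred, target)
loss.backward()
print(float(loss))
```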