Computerized Medical Imaging and Graphics: Latest Articles

Computational modeling of tumor invasion from limited and diverse data in Glioblastoma

IF 5.4 | CAS Tier 2, Medicine
Computerized Medical Imaging and Graphics, Pub Date: 2024-10-01, DOI: 10.1016/j.compmedimag.2024.102436
Authors: Padmaja Jonnalagedda, Brent Weinberg, Taejin L. Min, Shiv Bhanu, Bir Bhanu

Abstract: For diseases with high morbidity rates such as Glioblastoma Multiforme, the prognostic and treatment planning pipeline requires a comprehensive analysis of imaging, clinical, and molecular data. Many mutations have been shown to correlate strongly with patients' median survival and response to therapy, and studies have demonstrated that these mutations manifest as specific visual biomarkers in tumor imaging modalities such as MRI. To minimize the number of invasive procedures performed on a patient and to optimize resources across the prognostic and treatment planning process, the correlation of imaging and molecular features has garnered much interest. While the tumor mass is the most significant feature, the impacted tissue surrounding the tumor is also a significant biomarker contributing to the visual manifestation of mutations, one that has not been studied as extensively. The pattern of tumor growth shapes the surrounding tissue, so that tissue reflects tumor properties as well. Modeling how tumor growth impacts the surrounding tissue can reveal important information about patterns of tumor enhancement, which in turn has significant diagnostic and prognostic value. This paper presents the first work to automate the computational modeling of the impacted tissue surrounding the tumor using generative deep learning. The paper isolates and quantifies the impact of tumor invasion (TI) on surrounding tissue based on change in mutation status, and subsequently assesses its prognostic value. Furthermore, a TI Generative Adversarial Network (TI-GAN) is proposed to model the tumor invasion properties. Extensive qualitative and quantitative analyses, cross-dataset testing, and blind radiologist tests demonstrate that TI-GAN realistically models tumor invasion under the practical challenges of medical datasets, such as limited data and high intra-class heterogeneity.

Citations: 0
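The abstract's central object is the impacted tissue surrounding the tumor. As an illustration of how such a region can be isolated from a segmentation, the sketch below derives a peritumoral band by morphological dilation of a tumor mask. This is a generic preprocessing idea, not the authors' published pipeline; the band width, function name, and toy volume are assumptions.

```python
import numpy as np
from scipy import ndimage

def peritumoral_band(tumor_mask: np.ndarray, width_vox: int = 5) -> np.ndarray:
    """Return a binary band of `width_vox` voxels surrounding a tumor mask.

    Illustrative way to isolate the 'impacted tissue' region the paper
    analyzes; the actual TI-GAN preprocessing is not reproduced here.
    """
    dilated = ndimage.binary_dilation(tumor_mask, iterations=width_vox)
    return dilated & ~tumor_mask.astype(bool)

# Toy example: a spherical "tumor" in a small volume.
vol = np.zeros((64, 64, 64), dtype=bool)
zz, yy, xx = np.ogrid[:64, :64, :64]
vol[(zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2] = True
band = peritumoral_band(vol, width_vox=4)
print(band.sum(), "voxels in the peritumoral band")
```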
Detecting thyroid nodules along with surrounding tissues and tracking nodules using motion prior in ultrasound videos

IF 5.4 | CAS Tier 2, Medicine
Computerized Medical Imaging and Graphics, Pub Date: 2024-10-01, DOI: 10.1016/j.compmedimag.2024.102439
Authors: Song Gao, Yueyang Li, Haichi Luo

Abstract: Ultrasound examination plays a crucial role in the clinical diagnosis of thyroid nodules. Although deep learning has been applied to thyroid nodule examination, existing methods overlook the prior knowledge that nodules move along a straight line in the video. We propose a new detection model, DiffusionVID-Line, and design a novel tracking algorithm, ByteTrack-Line, both of which fully leverage this linear-motion prior in thyroid ultrasound videos. ByteTrack-Line groups detected nodules, further reducing the workload of doctors and significantly improving their diagnostic speed and accuracy. In DiffusionVID-Line, we propose two new modules: Freq-FPN and Attn-Line. The Freq-FPN module extracts frequency features, using them to reduce the impact of image blur in ultrasound videos. Based on the standard practice of segmented scanning by doctors, the Attn-Line module increases attention on targets moving along a straight line, improving detection accuracy. In ByteTrack-Line, given the linear motion of nodules, we propose the Match-Line association module, which reduces the number of nodule ID switches. On the detection and tracking test datasets, DiffusionVID-Line achieved a mean Average Precision (mAP50) of 74.2 for multiple tissues and 85.6 for nodules, while ByteTrack-Line achieved a Multiple Object Tracking Accuracy (MOTA) of 83.4. Both nodule detection and tracking achieve state-of-the-art performance.

Citations: 0
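ByteTrack-Line rests on the prior that nodules move along a straight line across frames. One simple way to score that prior is to fit a line to a track's past centroids and measure how far a candidate detection falls from it. The sketch below illustrates the idea only; it is not the paper's Match-Line module, and the example coordinates are invented.

```python
import numpy as np

def line_distance(track_centroids: np.ndarray, candidate: np.ndarray) -> float:
    """Distance from `candidate` (x, y) to the least-squares line through a
    track's past centroids, shape (N, 2). Serves as a linear-motion prior."""
    mean = track_centroids.mean(axis=0)
    pts = track_centroids - mean
    # Principal direction of motion via SVD of the centered points.
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    direction = vt[0]                    # unit vector along the fitted line
    rel = candidate - mean
    along = rel @ direction              # projection onto the line
    return float(np.linalg.norm(rel - along * direction))

track = np.array([[10.0, 50.0], [12.1, 50.4], [14.0, 50.9], [16.2, 51.3]])
print(line_distance(track, np.array([18.0, 51.8])))  # small: consistent motion
print(line_distance(track, np.array([18.0, 70.0])))  # large: off the line
```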
RibFractureSys: A gem in the face of acute rib fracture diagnoses

IF 5.4 | CAS Tier 2, Medicine
Computerized Medical Imaging and Graphics, Pub Date: 2024-10-01, DOI: 10.1016/j.compmedimag.2024.102429
Authors: Riel Castro-Zunti, Kaike Li, Aleti Vardhan, Younhee Choi, Gong Yong Jin, Seok-bum Ko

Abstract: Rib fracture patients, common in trauma wards, have different mortality rates and comorbidities depending on how many and which ribs are fractured. This knowledge is therefore paramount for making accurate prognoses and prioritizing patient care. However, tracking 24 ribs over upwards of 200+ frames in a patient's scan is time-consuming and error-prone for radiologists, especially depending on their experience.

We propose an automated, modular, three-stage solution to assist radiologists. Using 9 fully annotated patient scans, we trained a multi-class U-Net to segment rib lesions and common anatomical clutter. To recognize rib fractures and mitigate false positives, we fine-tuned a ResNet-based model using 5698 false positives, 2037 acute fractures, 4786 healed fractures, and 14,904 unfractured rib lesions. Using almost 200 patient cases, we developed a highly task-customized multi-object rib lesion tracker to determine which lesions in a frame belong to which of the 12 ribs on either side; bounding-box intersection-over-union and centroid-based tracking, a line-crossing methodology, and various heuristics were utilized. Our system accepts an axial CT scan and processes, labels, and color-codes the scan.

Over an internal validation dataset of 1000 acute rib fracture and 1000 control patients, our system, assessed by a third-year radiology resident, achieved 96.1% and 97.3% correct fracture classification accuracy for rib fracture and control patients, respectively. However, 18.0% and 20.8% of these patients, respectively, had incorrect rib labeling. Percentages remained consistent across sex and age demographics. Labeling issues include anatomical clutter being mislabeled as ribs and ribs going unlabeled.

Citations: 0
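The tracker links lesions across axial slices with bounding-box IoU and centroid heuristics. Below is a minimal sketch of greedy IoU matching between consecutive slices; the threshold and boxes are assumptions, and the paper's line-crossing and rib-assignment logic is not reproduced.

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match_slices(prev_boxes, cur_boxes, iou_thr=0.3):
    """Greedy IoU matching of lesions between consecutive CT slices."""
    pairs = sorted(
        ((iou(p, c), i, j) for i, p in enumerate(prev_boxes)
                           for j, c in enumerate(cur_boxes)),
        reverse=True)
    used_p, used_c, matches = set(), set(), []
    for score, i, j in pairs:
        if score < iou_thr or i in used_p or j in used_c:
            continue
        used_p.add(i); used_c.add(j); matches.append((i, j))
    return matches

prev = [(10, 10, 30, 30), (50, 50, 70, 70)]
cur = [(12, 11, 32, 31), (80, 80, 95, 95)]
print(match_slices(prev, cur))  # [(0, 0)]; the second lesion is unmatched
```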
Machine learning-based diagnostics of capsular invasion in thyroid nodules with wide-field second harmonic generation microscopy

IF 5.4 | CAS Tier 2, Medicine
Computerized Medical Imaging and Graphics, Pub Date: 2024-10-01, DOI: 10.1016/j.compmedimag.2024.102440
Authors: Yaraslau Padrez, Lena Golubewa, Igor Timoshchenko, Adrian Enache, Lucian G. Eftimie, Radu Hristu, Danielis Rutkauskas

Abstract: Papillary thyroid carcinoma (PTC) is one of the most common, well-differentiated carcinomas of the thyroid gland. PTC nodules are often surrounded by a collagen capsule that prevents the spread of cancer cells. However, as the malignant tumor progresses, the integrity of this protective barrier is compromised and cancer cells invade the surroundings. Detection of capsular invasion is therefore crucial for diagnosis and the choice of treatment, and the development of new approaches that increase diagnostic performance is of great importance. In the present study, we exploited wide-field second harmonic generation (SHG) microscopy in combination with texture analysis and unsupervised machine learning (ML) to explore the possibility of quantitatively characterizing collagen structure in the capsule and designating different capsule areas as intact, disrupted by invasion, or apt to invasion. Two-step k-means clustering showed that the collagen capsules in all analyzed tissue sections were highly heterogeneous and exhibited distinct segments described by characteristic ML parameter sets. These parameter sets allowed a structural interpretation of the collagen fibers at sites of overt invasion as fragmented and curled fibers that rarely form distributed networks. Clustering analysis also distinguished areas in the PTC capsule that were not categorized as invasion sites by the initial histopathological analysis but could be recognized as prospective micro-invasions after additional inspection. The characteristic features of suspicious and invasive sites identified by the proposed unsupervised ML approach can become a reliable complement to existing methods for diagnosing encapsulated PTC, increase the reliability of diagnosis, simplify decision making, and prevent human-related diagnostic errors. In addition, the proposed automated ML-based selection of collagen capsule images and exclusion of non-informative regions can greatly accelerate and simplify the development of reliable, fully automated ML diagnosis methods that can be integrated into clinical practice.

Citations: 0
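The pipeline pairs texture analysis with unsupervised k-means clustering. A small sketch of that generic combination follows: GLCM texture descriptors fed to KMeans. The feature set, patch size, and cluster count are assumptions, not the study's configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def glcm_features(patch: np.ndarray) -> list:
    """Four GLCM texture descriptors for an 8-bit grayscale patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")]

rng = np.random.default_rng(0)
# Toy stand-ins for capsule patches: smooth texture vs. noisy texture.
patches = [rng.integers(100, 110, (32, 32), dtype=np.uint8) for _ in range(10)] \
        + [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(10)]
X = np.array([glcm_features(p) for p in patches])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # the two texture families should fall into different clusters
```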
Dynamic MRI interpolation in temporal direction using an unsupervised generative model

IF 5.4 | CAS Tier 2, Medicine
Computerized Medical Imaging and Graphics, Pub Date: 2024-09-22, DOI: 10.1016/j.compmedimag.2024.102435
Authors: Corbin Maciel, Qing Zou

Abstract:
Purpose: Cardiac cine magnetic resonance imaging (MRI) is an important tool for assessing dynamic heart function. However, the technique requires long acquisition times and long breath holds, which present difficulties. The aim of this study is to propose an unsupervised neural network framework that performs cardiac cine interpolation in time, increasing the temporal resolution of cardiac cine without increasing acquisition time.

Methods: A subject-specific unsupervised generative neural network is designed to perform temporal interpolation for cardiac cine MRI. The network takes a 2D latent vector in which each element corresponds to one cardiac phase in the cardiac cycle and outputs the cardiac cine images acquired on the scanner. After training the generative network, we interpolate the 2D latent vector and feed the interpolated vector to the network, which outputs the frame-interpolated cine images. The results of the proposed cine interpolation neural network (CINN) framework are compared quantitatively and qualitatively with other state-of-the-art methods, the ground-truth training cine frames, and the ground-truth frames removed from the original acquisition. Signal-to-noise ratio (SNR), structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), strain analysis, and sharpness computed with the Tenengrad algorithm were used for image quality assessment.

Results: As shown quantitatively and qualitatively, the proposed framework learns the generative task well and hence performs the temporal interpolation task well. Both quantitative and qualitative comparison studies show the effectiveness of the proposed framework for cardiac cine interpolation in time.

Conclusion: The proposed generative model can effectively learn the generative task and perform high-quality temporal cardiac cine interpolation.

Citations: 0
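The key mechanism is that each cardiac phase maps to a point in a 2D latent space, so interpolating latents interpolates frames. The PyTorch sketch below illustrates only that interpolation step with a toy decoder; the architecture and latent values are placeholders, not the CINN network.

```python
import torch
import torch.nn as nn

class TinyDecoder(nn.Module):
    """Placeholder generator: 2-D latent -> 32x32 image (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 128), nn.ReLU(),
            nn.Linear(128, 32 * 32), nn.Tanh())

    def forward(self, z):
        return self.net(z).view(-1, 1, 32, 32)

decoder = TinyDecoder()           # assume this was trained on acquired frames
z_a = torch.tensor([[0.0, 1.0]])  # latent for cardiac phase k (invented)
z_b = torch.tensor([[1.0, 0.0]])  # latent for cardiac phase k+1 (invented)
alphas = torch.linspace(0, 1, steps=5).view(-1, 1)
z_interp = (1 - alphas) * z_a + alphas * z_b  # linear path between phases
frames = decoder(z_interp)        # 5 frames, 3 of them in-between phases
print(frames.shape)               # torch.Size([5, 1, 32, 32])
```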
BreasTDLUSeg: A coarse-to-fine framework for segmentation of breast terminal duct lobular units on histopathological whole-slide images

IF 5.4 | CAS Tier 2, Medicine
Computerized Medical Imaging and Graphics, Pub Date: 2024-09-19, DOI: 10.1016/j.compmedimag.2024.102432
Authors: Zixiao Lu, Kai Tang, Yi Wu, Xiaoxuan Zhang, Ziqi An, Xiongfeng Zhu, Qianjin Feng, Yinghua Zhao

Abstract: Automatic segmentation of breast terminal duct lobular units (TDLUs) on histopathological whole-slide images (WSIs) is crucial for the quantitative evaluation of TDLUs in the diagnostic and prognostic analysis of breast cancer. However, TDLU segmentation remains a great challenge due to TDLUs' highly heterogeneous sizes, structures, and morphologies, as well as their small areas on WSIs. In this study, we propose BreasTDLUSeg, an efficient coarse-to-fine two-stage framework based on multi-scale attention that localizes and precisely segments TDLUs on hematoxylin and eosin (H&E)-stained WSIs. BreasTDLUSeg consists of two networks: a superpatch-based patch-level classification network (SPPC-Net) and a patch-based pixel-level segmentation network (PPS-Net). SPPC-Net takes a superpatch as input and adopts a sub-region classification head to classify each patch within the superpatch as TDLU-positive or negative. PPS-Net takes the TDLU-positive patches derived from SPPC-Net as input; it deploys a multi-scale CNN-Transformer encoder to learn enhanced multi-scale morphological representations and an upsampler to generate pixel-wise segmentation masks for the TDLU-positive patches. We also constructed two breast cancer TDLU datasets, containing a total of 530 superpatch images with patch-level annotations and 2322 patch images with pixel-level annotations, to enable the development of TDLU segmentation methods. Experiments on the two datasets demonstrate that BreasTDLUSeg outperforms other state-of-the-art methods, with the highest Dice similarity coefficients of 79.97% and 92.93%, respectively. The proposed method shows great potential to assist pathologists in the pathological analysis of breast cancer. An open-source implementation of our approach is available at https://github.com/Dian-kai/BreasTDLUSeg.

Citations: 0
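The coarse-to-fine design means the pixel-level segmenter only ever sees patches the classifier deems TDLU-positive. Below is a hedged sketch of that gating logic; the networks are tiny placeholders, and the shapes and threshold are assumptions, not SPPC-Net and PPS-Net.

```python
import torch
import torch.nn as nn

# Placeholder stand-ins for SPPC-Net and PPS-Net (not the published models).
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))
segmenter = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 1))

def coarse_to_fine(patches: torch.Tensor, thr: float = 0.5):
    """Stage 1 keeps TDLU-positive patches; stage 2 segments only those."""
    with torch.no_grad():
        probs = classifier(patches).softmax(dim=1)[:, 1]  # P(TDLU positive)
        keep = probs > thr
        masks = torch.zeros(patches.shape[0], 1, 64, 64)
        if keep.any():
            masks[keep] = segmenter(patches[keep]).sigmoid()
    return keep, masks

patches = torch.randn(8, 3, 64, 64)  # eight H&E patches from one superpatch
keep, masks = coarse_to_fine(patches)
print(keep.sum().item(), "patches passed to the segmenter")
```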
Main challenges on the curation of large scale datasets for pancreas segmentation using deep learning in multi-phase CT scans: Focus on cardinality, manual refinement, and annotation quality

IF 5.4 | CAS Tier 2, Medicine
Computerized Medical Imaging and Graphics, Pub Date: 2024-09-13, DOI: 10.1016/j.compmedimag.2024.102434
Authors: Matteo Cavicchioli, Andrea Moglia, Ludovica Pierelli, Giacomo Pugliese, Pietro Cerveri

Abstract: Accurate segmentation of the pancreas in computed tomography (CT) holds paramount importance in diagnostics, surgical planning, and interventions. Recent studies have proposed supervised deep-learning models for segmentation, but their efficacy relies on the quality and quantity of the training data. Most such works employed small-scale public datasets, without proving the efficacy of generalization to external datasets. This study explored the optimization of pancreas segmentation accuracy by pinpointing the ideal dataset size, understanding resource implications, examining the impact of manual refinement, and assessing the influence of anatomical subregions. We present the AIMS-1300 dataset, encompassing 1,300 CT scans; its manual annotation by medical experts required 938 h. A 2.5D U-Net was implemented to assess the impact of training sample size on segmentation accuracy by partitioning the original AIMS-1300 dataset into 11 smaller subsets of progressively increasing size. The findings revealed that training sets exceeding 440 CTs did not lead to better segmentation performance, whereas nnU-Net and U-Net with Attention Gate reached a plateau at 585 CTs. Tests of generalization on the publicly available AMOS-CT dataset confirmed this outcome. As the size of the AIMS-1300 training partition increases, the number of error slices decreases, reaching a minimum at 730 and 440 CTs for the AIMS-1300 and AMOS-CT datasets, respectively. Segmentation metrics on the AIMS-1300 and AMOS-CT datasets improved more on the head than on the body and tail of the pancreas as the dataset size increased. By carefully considering the task and the characteristics of the available data, researchers can develop deep learning models without sacrificing performance, even with limited data. This could accelerate the development and deployment of artificial intelligence tools for pancreas surgery and other surgical data science applications.

Citations: 0
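A learning-curve experiment of this kind needs nested training subsets, so that each smaller set is contained in every larger one and the curve is not confounded by resampling. A minimal sketch of constructing such partitions follows; some subset sizes echo numbers from the abstract, but the helper and the split are illustrative assumptions, not the study's protocol.

```python
import random

def nested_subsets(case_ids, sizes, seed=42):
    """Shuffle once, then return nested training subsets of growing size,
    so each smaller set is contained in every larger one."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)
    return {n: ids[:n] for n in sorted(sizes)}

cases = [f"case_{i:04d}" for i in range(1300)]
subsets = nested_subsets(cases, sizes=[110, 220, 440, 585, 730, 1300])
for n, subset in subsets.items():
    print(n, subset[:2], "...")
    # train_model(subset); evaluate on a fixed held-out test set
```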
Towards explainable oral cancer recognition: Screening on imperfect images via Informed Deep Learning and Case-Based Reasoning

IF 5.4 | CAS Tier 2, Medicine
Computerized Medical Imaging and Graphics, Pub Date: 2024-09-11, DOI: 10.1016/j.compmedimag.2024.102433
Authors: Marco Parola, Federico A. Galatolo, Gaetano La Mantia, Mario G.C.A. Cimino, Giuseppina Campisi, Olga Di Fede

Abstract: Oral squamous cell carcinoma recognition presents a challenge due to late diagnosis and costly data acquisition. A cost-efficient, computerized screening system is crucial for early disease detection, minimizing the need for expert intervention and expensive analysis. Moreover, transparency is essential to align these systems with critical-sector applications. Explainable Artificial Intelligence (XAI) provides techniques for understanding models. However, current XAI is mostly data-driven and focused on addressing developers' requirements for improving models rather than clinical users' demands for expressing relevant insights. Among different XAI strategies, we propose a solution that combines the Case-Based Reasoning paradigm, to provide visual output explanations, with Informed Deep Learning (IDL), to integrate medical knowledge into the system. A key aspect of our solution is its capability to handle data imperfections, including labeling inaccuracies and artifacts, thanks to an ensemble architecture on top of the deep learning (DL) workflow. We conducted several experimental benchmarks on a dataset collected in collaboration with medical centers. Our findings reveal that the IDL approach yields an accuracy of 85%, surpassing the 77% accuracy achieved by DL alone. Furthermore, we measured the human-centered explainability of the two approaches, and IDL generates explanations more congruent with clinical users' demands.

Citations: 0
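Case-Based Reasoning explanations are typically produced by retrieving the most similar past cases in some feature space and presenting them next to the prediction. A generic sketch with nearest-neighbour retrieval follows; the embeddings, labels, and neighbour count are invented, and this is not the authors' implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
case_features = rng.normal(size=(200, 64))   # embeddings of past cases (toy)
case_labels = rng.integers(0, 2, size=200)   # 0 = benign, 1 = carcinoma (toy)

retriever = NearestNeighbors(n_neighbors=3).fit(case_features)

def explain(query_embedding: np.ndarray):
    """Return indices, labels, and distances of the 3 most similar past
    cases, which a clinician can inspect as a visual justification."""
    dist, idx = retriever.kneighbors(query_embedding.reshape(1, -1))
    return list(zip(idx[0].tolist(), case_labels[idx[0]].tolist(), dist[0]))

print(explain(rng.normal(size=64)))
```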
Hematoma expansion prediction in intracerebral hemorrhage patients by using synthesized CT images in an end-to-end deep learning framework

IF 5.4 | CAS Tier 2, Medicine
Computerized Medical Imaging and Graphics, Pub Date: 2024-09-05, DOI: 10.1016/j.compmedimag.2024.102430
Authors: Cansu Yalcin, Valeriia Abramova, Mikel Terceño, Arnau Oliver, Yolanda Silva, Xavier Lladó

Abstract: Spontaneous intracerebral hemorrhage (ICH) is a type of stroke less prevalent than ischemic stroke but associated with high mortality rates. Hematoma expansion (HE) is an increase in the bleeding that affects 30%-38% of hemorrhagic stroke patients; it is observed within 24 h of onset and is associated with patient worsening. Clinically, it is relevant to identify from their initial computed tomography (CT) scans which patients will develop HE, as this could improve patient management and treatment decisions. However, this is a significant challenge due to the predictive nature of the task and its low prevalence, which hinders the availability of large datasets with the required longitudinal information. In this work, we present an end-to-end deep learning framework capable of predicting which cases will exhibit HE using only the initial basal image. The framework is based on the 2D EfficientNet B0 model and predicts the occurrence of HE using initial non-contrast CT scans and their corresponding lesion annotations as priors. We used an in-house dataset of 122 ICH patients, including 35 HE cases, containing longitudinal CT scans with manual lesion annotations in both the basal and follow-up scans (obtained within 24 h after the basal scan). Experiments were conducted using a 5-fold cross-validation strategy. We addressed the limited-data problem by incorporating synthetic images into the training process. To the best of our knowledge, our approach is novel in the field of HE prediction, being the first to use image synthesis to enhance results. We studied different scenarios: training only with the original scans, using standard image augmentation techniques, and using synthetic image generation. The best performance was achieved by adding five generated versions of each image, along with standard data augmentation, during training. This significantly improved (p = 0.0003) the performance of our baseline model trained directly on the original CT scans, from an accuracy of 0.56 to 0.84, F1-score of 0.53 to 0.82, sensitivity of 0.51 to 0.77, and specificity of 0.60 to 0.91. The proposed approach shows promising results in predicting HE, especially with the inclusion of synthetically generated images. These results highlight the significance of this research direction, which has the potential to improve the clinical management of patients with hemorrhagic stroke.

The code is available at: https://github.com/NIC-VICOROB/HE-prediction-SynthCT.

Citations: 0
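The winning recipe adds five generated versions of each basal scan to the training set. The sketch below mirrors only that dataset-expansion bookkeeping; `synthesize` is a hypothetical stand-in (simple intensity jitter plus noise) for the paper's image-synthesis model, which is not reproduced here.

```python
import numpy as np

def synthesize(scan: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Hypothetical stand-in for the paper's image-synthesis model;
    here just intensity jitter plus noise so the sketch runs."""
    return scan * rng.uniform(0.95, 1.05) + rng.normal(0, 0.01, scan.shape)

def expand_training_set(scans, labels, n_synth=5, seed=0):
    """Pair each original scan with `n_synth` generated versions,
    mirroring the 'five generated versions per image' recipe."""
    rng = np.random.default_rng(seed)
    out_x, out_y = [], []
    for scan, y in zip(scans, labels):
        out_x.append(scan); out_y.append(y)
        for _ in range(n_synth):
            out_x.append(synthesize(scan, rng)); out_y.append(y)
    return out_x, out_y

scans = [np.random.rand(16, 16) for _ in range(4)]
x, y = expand_training_set(scans, labels=[0, 1, 0, 1])
print(len(x))  # 24 = 4 originals + 20 synthetic
```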
CycleSGAN: A cycle-consistent and semantics-preserving generative adversarial network for unpaired MR-to-CT image synthesis

IF 5.4 | CAS Tier 2, Medicine
Computerized Medical Imaging and Graphics, Pub Date: 2024-09-04, DOI: 10.1016/j.compmedimag.2024.102431
Authors: Runze Wang, Alexander F. Heimann, Moritz Tannast, Guoyan Zheng

Abstract: CycleGAN has been leveraged to synthesize a CT image from an available MR image after training on unpaired data. Due to the lack of direct constraints between the synthetic and input images, CycleGAN cannot guarantee structural consistency and often generates inaccurate mappings that shift the anatomy, which is highly undesirable for downstream clinical applications such as MRI-guided radiotherapy treatment planning and PET/MRI attenuation correction. In this paper, we propose a cycle-consistent and semantics-preserving generative adversarial network, referred to as CycleSGAN, for unpaired MR-to-CT image synthesis. Our design features a novel and generic way to incorporate semantic information into CycleGAN: a pair of three-player games within the CycleGAN framework, where each three-player game consists of one generator and two discriminators that formulate two distinct types of adversarial learning, appearance adversarial learning and structure adversarial learning. These two types of adversarial learning are alternately trained to ensure both realistic image synthesis and semantic structure preservation. Results on unpaired hip MR-to-CT image synthesis show that our method produces synthetic CT images superior in both accuracy and visual quality to those of other state-of-the-art (SOTA) unpaired MR-to-CT synthesis methods.

Citations: 0
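Each three-player game sets one generator against an appearance discriminator and a structure discriminator, and the generator's loss sums both adversarial terms. A heavily hedged PyTorch sketch of that loss composition follows; the networks are trivial placeholders, and the shapes, architectures, and (unit) weighting are assumptions, not the CycleSGAN design.

```python
import torch
import torch.nn as nn

# Trivial placeholder networks; the real CycleSGAN architectures differ.
G = nn.Conv2d(1, 1, 3, padding=1)                        # MR -> synthetic CT
D_app = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1))     # judges appearance
D_struct = nn.Sequential(nn.Conv2d(2, 1, 3, padding=1))  # judges (image, seg) pairs

bce = nn.BCEWithLogitsLoss()
mr = torch.randn(2, 1, 32, 32)
seg = torch.randint(0, 2, (2, 1, 32, 32)).float()  # semantic labels of `mr`

fake_ct = G(mr)
# Appearance adversarial term: the fake CT should look like a real CT.
app_logits = D_app(fake_ct)
loss_app = bce(app_logits, torch.ones_like(app_logits))
# Structure adversarial term: the fake CT must stay consistent with semantics.
struct_logits = D_struct(torch.cat([fake_ct, seg], dim=1))
loss_struct = bce(struct_logits, torch.ones_like(struct_logits))
loss_G = loss_app + loss_struct  # one generator vs. two discriminators
loss_G.backward()
# (A full loop would alternate updates of G, D_app, and D_struct with
#  real/fake targets; only the generator-side composition is shown.)
print(float(loss_G))
```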