Computerized Medical Imaging and Graphics — Latest Articles

Main challenges on the curation of large scale datasets for pancreas segmentation using deep learning in multi-phase CT scans: Focus on cardinality, manual refinement, and annotation quality
IF 5.4 | CAS Zone 2, Medicine
Computerized Medical Imaging and Graphics | Pub Date: 2024-09-13 | DOI: 10.1016/j.compmedimag.2024.102434
Matteo Cavicchioli, Andrea Moglia, Ludovica Pierelli, Giacomo Pugliese, Pietro Cerveri

Abstract: Accurate segmentation of the pancreas in computed tomography (CT) holds paramount importance in diagnostics, surgical planning, and interventions. Recent studies have proposed supervised deep-learning models for segmentation, but their efficacy relies on the quality and quantity of the training data. Most such works employed small-scale public datasets without demonstrating generalization to external datasets. This study explored the optimization of pancreas segmentation accuracy by pinpointing the ideal dataset size, understanding resource implications, examining the impact of manual refinement, and assessing the influence of anatomical subregions. We present the AIMS-1300 dataset, encompassing 1,300 CT scans; its manual annotation by medical experts required 938 h. A 2.5D UNet was implemented to assess the impact of training sample size on segmentation accuracy by partitioning the original AIMS-1300 dataset into 11 subsets of progressively increasing size. The findings revealed that training sets exceeding 440 CTs did not lead to better segmentation performance, whereas nnU-Net and UNet with Attention Gate reached a plateau at 585 CTs. Tests of generalization on the publicly available AMOS-CT dataset confirmed this outcome. As the size of the AIMS-1300 training partition increases, the number of error slices decreases, reaching a minimum at 730 and 440 CTs for the AIMS-1300 and AMOS-CT datasets, respectively. Segmentation metrics on both datasets improved more on the head than on the body and tail of the pancreas as the dataset size increased. By carefully considering the task and the characteristics of the available data, researchers can develop deep-learning models without sacrificing performance even with limited data. This could accelerate the development and deployment of artificial-intelligence tools for pancreas surgery and other surgical data science applications.

Citations: 0
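The subset-partition experiment described above, training on nested subsets of increasing cardinality and locating the point where accuracy saturates, can be sketched as follows. This is a minimal numpy illustration, not the authors' code; the subset sizes and Dice values below are hypothetical.

```python
import numpy as np

def nested_subsets(n_total, sizes, seed=0):
    """Shuffle the scan indices once, then take nested prefixes so every
    smaller training set is contained in every larger one."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_total)
    return [order[:s].tolist() for s in sorted(sizes)]

def plateau_size(sizes, dice, tol=0.005):
    """Smallest training-set size whose Dice is within `tol` of the best,
    i.e. the point beyond which adding scans no longer helps."""
    best = max(dice)
    for s, d in zip(sizes, dice):
        if best - d <= tol:
            return s
    return sizes[-1]

# Hypothetical Dice curve: accuracy saturates around 440 scans.
sizes = [110, 220, 330, 440, 585, 730]
dice = [0.70, 0.76, 0.80, 0.820, 0.822, 0.823]
print(plateau_size(sizes, dice))  # -> 440
```

Nesting the subsets (rather than re-sampling each one) keeps the comparison between sizes free of sampling noise from disjoint pools.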
Towards explainable oral cancer recognition: Screening on imperfect images via Informed Deep Learning and Case-Based Reasoning
Computerized Medical Imaging and Graphics | Pub Date: 2024-09-11 | DOI: 10.1016/j.compmedimag.2024.102433
Marco Parola, Federico A. Galatolo, Gaetano La Mantia, Mario G.C.A. Cimino, Giuseppina Campisi, Olga Di Fede

Abstract: Oral squamous cell carcinoma recognition presents a challenge due to late diagnosis and costly data acquisition. A cost-efficient, computerized screening system is crucial for early disease detection, minimizing the need for expert intervention and expensive analysis. Moreover, transparency is essential to align these systems with critical-sector applications. Explainable Artificial Intelligence (XAI) provides techniques for understanding models; however, current XAI is mostly data-driven and focused on developers' requirements for improving models rather than on clinical users' demands for relevant insights. Among the different XAI strategies, we propose a solution that combines the Case-Based Reasoning paradigm, to provide visual output explanations, with Informed Deep Learning (IDL), to integrate medical knowledge into the system. A key aspect of our solution is its capability to handle data imperfections, including labeling inaccuracies and artifacts, thanks to an ensemble architecture on top of the deep learning (DL) workflow. We conducted several experimental benchmarks on a dataset collected in collaboration with medical centers. Our findings reveal that the IDL approach yields an accuracy of 85%, surpassing the 77% accuracy achieved by DL alone. Furthermore, we measured the human-centered explainability of the two approaches; IDL generates explanations more congruent with clinical users' demands.

Citations: 0
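Case-Based Reasoning explanations of the kind described above are typically built by retrieving the stored cases most similar to the query in an embedding space and showing them to the clinician. A minimal retrieval sketch, not the authors' pipeline; the embeddings and labels are toy values:

```python
import numpy as np

def nearest_cases(query, case_feats, case_labels, k=3):
    """Retrieve the k stored cases most similar to the query embedding
    (cosine similarity); the exemplars serve as a visual explanation."""
    q = query / np.linalg.norm(query)
    C = case_feats / np.linalg.norm(case_feats, axis=1, keepdims=True)
    sims = C @ q
    top = np.argsort(-sims)[:k]
    return [(int(i), case_labels[i], float(sims[i])) for i in top]

# Toy 2-D embeddings: the query sits closest to case 0, then case 2.
cases = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
labels = ["suspected-abnormal", "normal", "suspected-abnormal"]
print(nearest_cases(np.array([0.9, 0.1]), cases, labels, k=2))
```

The explanation is then "this lesion was classified as X because it resembles these previously diagnosed cases", which matches clinical reasoning more closely than a saliency map.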
Hematoma expansion prediction in intracerebral hemorrhage patients by using synthesized CT images in an end-to-end deep learning framework
Computerized Medical Imaging and Graphics | Pub Date: 2024-09-05 | DOI: 10.1016/j.compmedimag.2024.102430
Cansu Yalcin, Valeriia Abramova, Mikel Terceño, Arnau Oliver, Yolanda Silva, Xavier Lladó

Abstract: Spontaneous intracerebral hemorrhage (ICH) is a type of stroke less prevalent than ischemic stroke but associated with high mortality rates. Hematoma expansion (HE) is an increase in the bleeding that affects 30%–38% of hemorrhagic stroke patients; it is observed within 24 h of onset and is associated with patient worsening. Clinically, it is relevant to detect from initial computed tomography (CT) scans the patients who will develop HE, which could improve patient management and treatment decisions. However, this is a significant challenge due to the predictive nature of the task and its low prevalence, which hinders the availability of large datasets with the required longitudinal information. In this work, we present an end-to-end deep learning framework capable of predicting which cases will exhibit HE using only the initial basal image. The framework is based on the 2D EfficientNet B0 model and uses initial non-contrast CT scans and their corresponding lesion annotations as priors. We used an in-house dataset of 122 ICH patients, including 35 HE cases, containing longitudinal CT scans with manual lesion annotations in both basal and follow-up scans (obtained within 24 h after the basal scan). Experiments were conducted using a 5-fold cross-validation strategy. We addressed the limited-data problem by incorporating synthetic images into the training process; to the best of our knowledge, our approach is the first in HE prediction to use image synthesis to enhance results. We studied different scenarios: training only with the original scans, using standard image augmentation techniques, and using synthetic image generation. The best performance was achieved by adding five generated versions of each image, along with standard data augmentation, during training. This significantly improved (p = 0.0003) the performance of our baseline model trained directly on the original CT scans, from an accuracy of 0.56 to 0.84, an F1-score of 0.53 to 0.82, a sensitivity of 0.51 to 0.77, and a specificity of 0.60 to 0.91. The proposed approach shows promising results in predicting HE, especially with the inclusion of synthetically generated images, and has the potential to improve the clinical management of patients with hemorrhagic stroke. The code is available at: https://github.com/NIC-VICOROB/HE-prediction-SynthCT.

Citations: 0
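The best-performing setup above expands the training pool with five generated versions of each scan. The bookkeeping can be sketched as below; the `generate` callable stands in for the paper's learned image synthesis, and the noise-based `fake_gen` is purely illustrative:

```python
import numpy as np

def expand_with_synthetic(images, labels, generate, n_synth=5, seed=0):
    """Append n_synth generated versions of each scan to the training pool
    (the paper's best setting), keeping the original label for each copy."""
    rng = np.random.default_rng(seed)
    out_x, out_y = [], []
    for x, y in zip(images, labels):
        out_x.append(x)
        out_y.append(y)
        for _ in range(n_synth):
            out_x.append(generate(x, rng))
            out_y.append(y)
    return out_x, out_y

# Stand-in generator: the actual framework uses learned CT synthesis.
fake_gen = lambda x, rng: x + rng.normal(0.0, 0.01, x.shape)
xs, ys = expand_with_synthetic([np.zeros((4, 4))], [1], fake_gen)
print(len(xs))  # -> 6
```

Expanding the minority class this way is one common answer to the low prevalence of HE cases (35 of 122 patients here).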
CycleSGAN: A cycle-consistent and semantics-preserving generative adversarial network for unpaired MR-to-CT image synthesis
Computerized Medical Imaging and Graphics | Pub Date: 2024-09-04 | DOI: 10.1016/j.compmedimag.2024.102431
Runze Wang, Alexander F. Heimann, Moritz Tannast, Guoyan Zheng

Abstract: CycleGAN has been leveraged to synthesize a CT image from an available MR image after training on unpaired data. Due to the lack of direct constraints between the synthetic and the input images, CycleGAN cannot guarantee structural consistency and often generates inaccurate mappings that shift the anatomy, which is highly undesirable for downstream clinical applications such as MRI-guided radiotherapy treatment planning and PET/MRI attenuation correction. In this paper, we propose a cycle-consistent and semantics-preserving generative adversarial network, referred to as CycleSGAN, for unpaired MR-to-CT image synthesis. Our design features a novel and generic way to incorporate semantic information into CycleGAN: a pair of three-player games within the CycleGAN framework, where each game consists of one generator and two discriminators formulating two distinct types of adversarial learning, appearance adversarial learning and structure adversarial learning. These two types of adversarial learning are trained alternately to ensure both realistic image synthesis and semantic structure preservation. Results on unpaired hip MR-to-CT image synthesis show that our method produces synthetic CT images better in both accuracy and visual quality than other state-of-the-art (SOTA) unpaired MR-to-CT synthesis methods.

Citations: 0
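The three-player game pits one generator against an appearance discriminator (on the synthetic image) and a structure discriminator (on its semantic map). A minimal sketch of how the two adversarial terms could compose, assuming standard least-squares GAN objectives; the paper's exact losses and weighting may differ:

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    """Least-squares discriminator loss: push real outputs to 1, fake to 0."""
    return float(np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2))

def generator_loss(d_app_fake, d_struct_fake, lam=1.0):
    """One generator vs. two discriminators: an appearance term on the
    synthetic image plus a structure term on its predicted segmentation."""
    app = float(np.mean((d_app_fake - 1.0) ** 2))
    struct = float(np.mean((d_struct_fake - 1.0) ** 2))
    return app + lam * struct

# Toy discriminator outputs in [0, 1]; a perfect fool gives zero loss.
print(generator_loss(np.array([1.0]), np.array([1.0])))  # -> 0.0
```

Alternating which discriminator drives the update, as the abstract describes, lets realism and semantic preservation constrain the generator without fighting in a single objective.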
A lung biopsy path planning algorithm based on the double spherical constraint Pareto and indicators' importance-correlation degree
Computerized Medical Imaging and Graphics | Pub Date: 2024-08-31 | DOI: 10.1016/j.compmedimag.2024.102426
Hui Yang, Yu Zhang, Yuhang Gong, Jing Zhang, Ling He, Jianquan Zhong, Ling Tang

Abstract: Lung cancer has the highest mortality rate among cancers. The common clinical method for diagnosing lung cancer is CT-guided percutaneous transthoracic lung biopsy (CT-PTLB), but it demands a high level of clinical experience from doctors. In this work, an automatic path planning method for CT-PTLB is proposed to provide doctors with auxiliary advice on puncture paths. The method comprises three steps: preprocessing, initial path selection, and path evaluation. During preprocessing, the chest organs required for subsequent path planning are segmented. During initial path selection, a target point selection method meeting biopsy sampling requirements is proposed, including a down-sampling algorithm suited to different nodule shapes; entry points are then selected according to the chosen target points and clinical constraints. During path evaluation, the clinical needs of lung biopsy surgery are first quantified as path evaluation indicators and divided by evaluation perspective into risk and execution indicators. Then, considering the correlation between indicators, a path scoring system based on the double spherical constraint Pareto and the importance-correlation degree of the indicators is proposed to evaluate the comprehensive performance of the planned paths. The method is retrospectively tested on 6 CT images and prospectively tested on 25 CT images. Experimental results indicate that the proposed method can plan feasible puncture paths for different cases and can serve as an auxiliary tool for lung biopsy surgery.

Citations: 0
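Pareto-based path evaluation keeps every candidate that no other candidate beats on all indicators at once. A minimal non-dominated filter over per-path cost vectors (lower is better); the indicator values are illustrative, not the paper's double spherical constraint formulation:

```python
import numpy as np

def pareto_front(costs):
    """Indices of non-dominated candidates. Each row is one path's cost
    vector (e.g. risk indicators, execution indicators); lower is better."""
    costs = np.asarray(costs, dtype=float)
    keep = []
    for i, c in enumerate(costs):
        dominated = any(
            np.all(costs[j] <= c) and np.any(costs[j] < c)
            for j in range(len(costs)) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Path 2 (cost [3, 3]) is dominated by path 1 (cost [2, 2]) and drops out.
print(pareto_front([[1, 5], [2, 2], [3, 3]]))  # -> [0, 1]
```

A weighted score (here, the indicators' importance-correlation degree) is then only needed to rank the survivors, not to trade off clearly inferior paths.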
Economical hybrid novelty detection leveraging global aleatoric semantic uncertainty for enhanced MRI-based ACL tear diagnosis
Computerized Medical Imaging and Graphics | Pub Date: 2024-08-29 | DOI: 10.1016/j.compmedimag.2024.102424
Athanasios Siouras, Serafeim Moustakidis, George Chalatsis, Tuan Aqeel Bohoran, Michael Hantes, Marianna Vlychou, Sotiris Tasoulis, Archontis Giannakidis, Dimitrios Tsaopoulos

Abstract: This study presents a hybrid deep learning (DL) framework that reformulates sagittal MRI-based anterior cruciate ligament (ACL) tear classification as a novelty detection problem to tackle class imbalance. We introduce a highly discriminative novelty score that leverages the aleatoric semantic uncertainty as modeled in the class scores output by the YOLOv5-nano object detection (OD) model. To account for tissue continuity, we propose using the global scores (probability vector) when the model is applied to the entire sagittal sequence. The second module of the pipeline is the MINIROCKET time-series classification model, which determines whether a knee has an ACL tear. To better evaluate the generalization capabilities of our approach, we carried out cross-database testing involving two public databases (KneeMRI and MRNet) and a validation-only database from the University General Hospital of Larissa, Greece. Our method consistently outperformed (p-value < 0.05) the state-of-the-art (SOTA) approaches on the KneeMRI dataset and achieved better accuracy and sensitivity on the MRNet dataset. It also generalized remarkably well, especially when trained on KneeMRI. The presented framework generated at least 2.1 times less carbon emissions and consumed at least 2.6 times less energy than SOTA approaches. Integrating aleatoric semantic uncertainty-based scores into a novelty detection framework, combined with lightweight OD and time-series classification models, has the potential to set a new precedent in diagnostic precision, speed, and environmental sustainability for MRI-based injury detection. Our resource-efficient framework offers potential for widespread application.

Citations: 0
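Aleatoric semantic uncertainty over a detector's class scores is commonly quantified as the Shannon entropy of the (normalized) score vector: ambiguous detections score high, confident ones score low. A minimal sketch of such a novelty value; the exact score the paper builds from YOLOv5-nano outputs may differ:

```python
import math

def aleatoric_novelty(class_scores):
    """Shannon entropy of a detection's class-score vector: near-uniform
    scores (an ambiguous, possibly torn ligament) give a high novelty value."""
    total = sum(class_scores)
    probs = [s / total for s in class_scores]
    return -sum(p * math.log(p) for p in probs if p > 0)

# An ambiguous detection is 'more novel' than a confident one.
print(aleatoric_novelty([0.5, 0.5]) > aleatoric_novelty([0.95, 0.05]))  # -> True
```

Collecting one such value per slice across the sagittal sequence yields the kind of global time series that a classifier like MINIROCKET can then consume.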
Unsupervised adversarial neural network for enhancing vasculature in photoacoustic tomography images using optical coherence tomography angiography
Computerized Medical Imaging and Graphics | Pub Date: 2024-08-28 | DOI: 10.1016/j.compmedimag.2024.102425
Yutian Zhong, Zhenyang Liu, Xiaoming Zhang, Zhaoyong Liang, Wufan Chen, Cuixia Dai, Li Qi

Abstract: Photoacoustic tomography (PAT) is a powerful imaging modality for visualizing tissue physiology and exogenous contrast agents. However, PAT faces challenges in visualizing deep-seated vascular structures due to light scattering, absorption, and reduced signal intensity with depth. Optical coherence tomography angiography (OCTA) offers high-contrast visualization of vasculature networks, yet its imaging depth is limited to a millimeter scale. Herein, we propose OCPA-Net, a novel unsupervised deep learning method that utilizes the rich vascular features of OCTA to enhance PAT images. Trained on unpaired OCTA and PAT images, OCPA-Net incorporates a vessel-aware attention module to enhance deep-seated vessel details captured from OCTA. It leverages a domain-adversarial loss function to enforce structural consistency and a novel identity-invariant loss to mitigate excessive image content generation. We validate the structural fidelity of OCPA-Net in simulation experiments, and then demonstrate its vascular enhancement performance in in vivo imaging experiments on tumor-bearing mice and contrast-enhanced pregnant mice. The results show the promise of our method for comprehensive vessel-related image analysis in preclinical research applications.

Citations: 0
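An identity-style constraint of the kind mentioned above asks the enhancer, when fed an image already in the target domain, to change it as little as possible, which discourages inventing content. A sketch of the standard L1 identity term such losses build on; the paper's novel identity-invariant loss is its own formulation and may differ:

```python
import numpy as np

def identity_loss(enhancer, target_img):
    """L1 penalty when the enhancer is fed a target-domain image: the
    network should act like the identity rather than hallucinate vessels."""
    return float(np.mean(np.abs(enhancer(target_img) - target_img)))

# A perfect identity mapping incurs zero penalty.
ident = lambda x: x
print(identity_loss(ident, np.ones((2, 2))))  # -> 0.0
```

In unpaired training this term is what keeps enhancement from drifting into free-form image generation.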
Cell comparative learning: A cervical cytopathology whole slide image classification method using normal and abnormal cells
Computerized Medical Imaging and Graphics | Pub Date: 2024-08-28 | DOI: 10.1016/j.compmedimag.2024.102427
Jian Qin, Yongjun He, Yiqin Liang, Lanlan Kang, Jing Zhao, Bo Ding

Abstract: Automated cervical cancer screening through computer-assisted diagnosis has shown considerable potential to improve screening accessibility and reduce associated costs and errors. However, classification performance on whole slide images (WSIs) remains suboptimal due to patient-specific variations. To improve screening precision, pathologists not only analyze the characteristics of suspected abnormal cells but also compare them with normal cells. Motivated by this practice, we propose a novel cervical cell comparative learning method that leverages pathologist knowledge to learn the differences between normal and suspected abnormal cells within the same WSI. Our method employs two pre-trained YOLOX models to detect suspected abnormal and normal cells in a given WSI. A self-supervised model then extracts features for the detected cells. Subsequently, a tailored Transformer encoder fuses the cell features to obtain WSI instance embeddings. Finally, attention-based multi-instance learning is applied to achieve classification. The experimental results show an AUC of 0.9319 for our proposed method. Moreover, the method achieved professional pathologist-level performance, indicating its potential for clinical applications.

Citations: 0
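The final stage above, attention-based multi-instance learning, pools the per-cell embeddings of one WSI into a single bag embedding via learned attention weights. A minimal numpy sketch of the standard attention-MIL pooling operator (Ilse et al. style), with random stand-in weights rather than trained parameters:

```python
import numpy as np

def attention_mil_pool(H, V, w):
    """Attention-based MIL pooling: score each cell embedding, softmax the
    scores over the bag (the WSI), return the weighted bag embedding."""
    scores = np.tanh(H @ V) @ w          # one scalar score per cell
    a = np.exp(scores - scores.max())    # stable softmax over the bag
    a = a / a.sum()
    return a @ H, a

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))   # 5 detected cells, 8-dim features
V = rng.normal(size=(8, 4))   # attention projection (untrained stand-in)
w = rng.normal(size=4)
z, a = attention_mil_pool(H, V, w)
print(z.shape, round(float(a.sum()), 6))  # -> (8,) 1.0
```

The attention weights `a` double as an interpretability signal: they indicate which detected cells drove the slide-level decision.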
Evidence modeling for reliability learning and interpretable decision-making under multi-modality medical image segmentation
Computerized Medical Imaging and Graphics | Pub Date: 2024-08-07 | DOI: 10.1016/j.compmedimag.2024.102422
Jianfeng Zhao, Shuo Li

Abstract: Reliability learning and interpretable decision-making are crucial for multi-modality medical image segmentation. Although many works have attempted multi-modality medical image segmentation, they rarely explore how much reliability each modality provides for segmentation. Moreover, existing decision-making approaches such as the softmax function lack interpretability for multi-modality fusion. In this study, we propose a novel approach named contextual discounted evidential network (CDE-Net) for reliability learning and interpretable decision-making in multi-modality medical image segmentation. Specifically, CDE-Net first models the semantic evidence by uncertainty measurement using the proposed evidential decision-making module. Then, it leverages the contextual discounted fusion layer to learn the reliability provided by each modality. Finally, a multi-level loss function is deployed to optimize evidence modeling and reliability learning. Moreover, this study elaborates on the framework's interpretability by discussing the consistency between pixel attribution maps and the learned reliability coefficients. Extensive experiments are conducted on both multi-modality brain and liver datasets. CDE-Net achieves high performance, with an average Dice score of 0.914 for brain tumor segmentation and 0.913 for liver tumor segmentation, demonstrating its great potential to facilitate the interpretation of artificial intelligence-based multi-modality medical image fusion.

Citations: 0
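Discounted evidential fusion of the kind CDE-Net builds on comes from Dempster–Shafer theory: each modality's evidence (a mass function) is discounted by its reliability before combination. A minimal sketch for a two-class frame {A, B} with Θ = {A, B} denoting ignorance; the network's learned, contextual discounting is more elaborate than this classical form:

```python
def discount(m, alpha):
    """Shafer discounting: scale all masses by reliability alpha and move
    the remaining 1 - alpha onto total ignorance Θ."""
    out = {k: alpha * v for k, v in m.items()}
    out["Θ"] = out.get("Θ", 0.0) + (1.0 - alpha)
    return out

def combine(m1, m2):
    """Dempster's rule on the frame {A, B, Θ}: multiply masses of
    intersecting focal sets, renormalize by the non-conflicting mass."""
    inter = {("A", "A"): "A", ("A", "Θ"): "A", ("Θ", "A"): "A",
             ("B", "B"): "B", ("B", "Θ"): "B", ("Θ", "B"): "B",
             ("Θ", "Θ"): "Θ"}
    out = {"A": 0.0, "B": 0.0, "Θ": 0.0}
    conflict = 0.0
    for x, p in m1.items():
        for y, q in m2.items():
            key = inter.get((x, y))
            if key is None:           # A vs B: empty intersection
                conflict += p * q
            else:
                out[key] += p * q
    return {k: v / (1.0 - conflict) for k, v in out.items()}

# An unreliable modality (alpha=0.8) keeps part of its mass as ignorance.
print(discount({"A": 1.0}, 0.8))  # -> {'A': 0.8, 'Θ': 0.2}
```

Because a low-reliability modality collapses toward the vacuous mass {Θ: 1}, it barely influences the fused decision, which is exactly the interpretable behavior the abstract argues softmax fusion lacks.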
TLF: Triple learning framework for intracranial aneurysms segmentation from unreliable labeled CTA scans
Computerized Medical Imaging and Graphics | Pub Date: 2024-07-26 | DOI: 10.1016/j.compmedimag.2024.102421
Lei Chai, Shuangqian Xue, Daodao Tang, Jixin Liu, Ning Sun, Xiujuan Liu

Abstract: Intracranial aneurysm (IA) is a prevalent disease that poses a significant threat to human health. Using computed tomography angiography (CTA) to diagnose IAs remains time-consuming and challenging. Deep neural networks (DNNs) have made significant advancements in medical image segmentation; nevertheless, training large-scale DNNs demands substantial quantities of high-quality labeled data, making the annotation of numerous brain CTA scans a challenging endeavor. To address these challenges and effectively develop a robust IA segmentation model from a large amount of unreliably labeled training data, we propose a triple learning framework (TLF) built on three learning paradigms: pseudo-supervised learning, contrastive learning, and confident learning. We introduce an enhanced mean teacher model and a voxel-selective strategy to conduct pseudo-supervised learning on unreliably labeled training data. Concurrently, we construct positive and negative training pairs within the high-level semantic feature space to improve the overall learning efficiency of the TLF through contrastive learning. In addition, multi-scale confident learning is proposed to correct unreliable labels, enabling the acquisition of broader local structural information instead of relying on individual voxels. To evaluate the effectiveness of our method, we conducted extensive experiments on a self-built database of hundreds of brain CTA scans with IAs. Experimental results demonstrate that our method can effectively learn a robust CTA-based IA segmentation model from unreliably labeled data, outperforming state-of-the-art methods in segmentation accuracy. Code is released at https://github.com/XueShuangqian/TLF.

Citations: 0
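The mean teacher component of pseudo-supervised learning keeps a second copy of the network whose weights are an exponential moving average (EMA) of the student's, and uses it to produce stable pseudo-labels. The update itself is one line; a minimal dictionary-of-weights sketch, not the enhanced variant the paper proposes:

```python
def ema_update(teacher, student, alpha=0.99):
    """Mean-teacher update: teacher weights track an exponential moving
    average of the student's, smoothing out noisy training steps."""
    return {k: alpha * teacher[k] + (1.0 - alpha) * student[k]
            for k in teacher}

# After one step with alpha=0.9, the teacher moves 10% toward the student.
t = ema_update({"w": 0.0}, {"w": 1.0}, alpha=0.9)
print(t)
```

Because the teacher averages over many student states, its predictions on unreliably labeled voxels fluctuate less, which is what makes them usable as pseudo-supervision targets.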