Computerized Medical Imaging and Graphics — Latest Articles

RPDNet: A reconstruction-regularized parallel decoders network for rectal tumor and rectum co-segmentation
IF 5.4 · Medicine, CAS Tier 2
Computerized Medical Imaging and Graphics, Vol. 118, Article 102453 · Pub Date: 2024-10-28 · DOI: 10.1016/j.compmedimag.2024.102453
Authors: WenXiang Huang, Ye Xu, Yuanyuan Wang, Hongtu Zheng, Yi Guo

Abstract: Accurate segmentation of the rectal tumor and rectum in magnetic resonance imaging (MRI) is important for precise tumor diagnosis and treatment planning. The variable shapes and unclear boundaries of rectal tumors make this task particularly challenging. Only a few studies have explored deep learning networks for rectal tumor segmentation, and they mainly adopt the classical encoder-decoder structure, whose frequent downsampling operations during feature extraction lose detailed information and limit the network's ability to precisely capture the shape and boundary of rectal tumors. This paper proposes a Reconstruction-regularized Parallel Decoder network (RPDNet) to address this information loss and to obtain accurate co-segmentation of both the rectal tumor and the rectum. RPDNet first establishes a shared-encoder, parallel-decoder framework to fully exploit the knowledge common to the two segmentation labels while reducing the number of network parameters. An auxiliary reconstruction branch is then introduced, computing a consistency loss between the reconstructed and input images to preserve sufficient anatomical structure information. Moreover, a parameter-free target-adaptive attention module is proposed to distinguish unclear boundaries by enhancing the feature-level contrast between rectal tumors and normal tissue. Experimental results indicate that the proposed method outperforms state-of-the-art approaches on the rectal tumor and rectum segmentation tasks, with Dice coefficients of 84.91% and 90.36%, respectively, demonstrating its potential value in clinical practice.

Citations: 0
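
To make the training signal concrete, here is a minimal PyTorch sketch of the shared-encoder/parallel-decoder layout with a reconstruction branch whose consistency loss regularizes segmentation, as the abstract describes. Layer sizes, the MSE consistency term, and the loss weight `lam` are illustrative assumptions, not RPDNet's actual architecture.

```python
# Sketch only: shared encoder, two parallel segmentation decoders, and an
# auxiliary reconstruction decoder whose output is pulled toward the input.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class RPDSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(block(1, 16), nn.MaxPool2d(2), block(16, 32))
        self.dec_tumor = nn.Sequential(nn.Upsample(scale_factor=2), block(32, 16), nn.Conv2d(16, 1, 1))
        self.dec_rectum = nn.Sequential(nn.Upsample(scale_factor=2), block(32, 16), nn.Conv2d(16, 1, 1))
        self.dec_recon = nn.Sequential(nn.Upsample(scale_factor=2), block(32, 16), nn.Conv2d(16, 1, 1))

    def forward(self, x):
        z = self.encoder(x)  # features shared by all three decoders
        return self.dec_tumor(z), self.dec_rectum(z), self.dec_recon(z)

bce = nn.BCEWithLogitsLoss()
mse = nn.MSELoss()

def total_loss(model, img, gt_tumor, gt_rectum, lam=0.1):
    p_t, p_r, recon = model(img)
    seg = bce(p_t, gt_tumor) + bce(p_r, gt_rectum)  # co-segmentation terms
    consistency = mse(recon, img)                   # reconstruction regularizer
    return seg + lam * consistency

x = torch.rand(2, 1, 64, 64)
loss = total_loss(RPDSketch(), x,
                  torch.randint(0, 2, (2, 1, 64, 64)).float(),
                  torch.randint(0, 2, (2, 1, 64, 64)).float())
```
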
Robust brain MRI image classification with SIBOW-SVM
IF 5.4 · Medicine, CAS Tier 2
Computerized Medical Imaging and Graphics, Vol. 118, Article 102451 · Pub Date: 2024-10-24 · DOI: 10.1016/j.compmedimag.2024.102451
Authors: Liyun Zeng, Hao Helen Zhang

Abstract: Primary central nervous system tumors in the brain are among the most aggressive diseases affecting humans. Early detection and classification of brain tumor types, whether benign or malignant, glial or non-glial, is critical for cancer prevention and treatment, ultimately improving human life expectancy. Magnetic resonance imaging (MRI) is the most effective technique for brain tumor detection, generating comprehensive brain scans. However, human examination can be error-prone and inefficient due to the complexity, size, and location variability of brain tumors. Recently, automated classification techniques using machine learning methods, such as convolutional neural networks (CNNs), have demonstrated significantly higher accuracy than manual screening. However, deep-learning-based image classification methods, including CNNs, face challenges in estimating class probabilities without proper model calibration (Guo et al., 2017; Minderer et al., 2021). In this paper, we propose a novel brain tumor image classification method called SIBOW-SVM, which integrates the bag-of-features model with SIFT feature extraction and weighted support vector machines. This approach can effectively extract hidden image features, enabling differentiation of various tumor types, provide accurate label predictions, and estimate the probability of an image belonging to each class, offering high-confidence classification decisions. We have also developed scalable and parallelizable algorithms to facilitate the practical implementation of SIBOW-SVM on massive image datasets. To benchmark our method, we apply SIBOW-SVM to a public dataset of brain tumor MRI images containing four classes: glioma, meningioma, pituitary, and normal. Our results demonstrate that the new method outperforms state-of-the-art techniques, including CNNs, in terms of uncertainty quantification, classification accuracy, computational efficiency, and data robustness.

Citations: 0
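
The pipeline named in the abstract (SIFT descriptors, a bag-of-features codebook, and a weighted SVM with probability estimates) can be sketched with OpenCV and scikit-learn as below; the codebook size, RBF kernel, and "balanced" class weighting are illustrative assumptions.

```python
# Sketch of a SIBOW-SVM-like pipeline: SIFT -> k-means visual codebook ->
# normalized bag-of-features histograms -> weighted SVM with probabilities.
import numpy as np
import cv2
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def sift_descriptors(images):
    """images: list of 8-bit grayscale numpy arrays."""
    sift = cv2.SIFT_create()
    descs = []
    for img in images:
        _, d = sift.detectAndCompute(img, None)
        descs.append(d if d is not None else np.zeros((1, 128), np.float32))
    return descs

def bow_histograms(descs, codebook):
    hists = []
    for d in descs:
        words = codebook.predict(d.astype(np.float64))
        h, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
        hists.append(h / max(h.sum(), 1))  # L1-normalized word histogram
    return np.vstack(hists)

def fit_sibow_like(train_imgs, labels, k=64):
    descs = sift_descriptors(train_imgs)
    codebook = KMeans(n_clusters=k, n_init=4).fit(np.vstack(descs))
    X = bow_histograms(descs, codebook)
    clf = SVC(kernel="rbf", class_weight="balanced", probability=True).fit(X, labels)
    return codebook, clf  # clf.predict_proba yields per-class probabilities
```
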
Active learning based on multi-enhanced views for classification of multiple patterns in lung ultrasound images
IF 5.4 · Medicine, CAS Tier 2
Computerized Medical Imaging and Graphics, Vol. 118, Article 102454 · Pub Date: 2024-10-24 · DOI: 10.1016/j.compmedimag.2024.102454
Authors: Yuanlu Ni, Yang Cong, Chengqian Zhao, Jinhua Yu, Yin Wang, Guohui Zhou, Mengjun Shen

Abstract: There are several main patterns in lung ultrasound (LUS) images, including A-lines, B-lines, consolidation, and pleural effusion. LUS images of healthy lungs typically exhibit only A-lines, while other patterns may emerge or coexist in LUS images associated with different lung diseases. Accurate categorization of these primary patterns is pivotal for effective lung disease screening. However, two challenges complicate the classification task: first, ultrasound imaging properties inherently blur the feature differences between the main patterns; second, multiple patterns may coexist in a single case, with only the most dominant pattern clinically annotated. To address these challenges, we propose active learning based on multi-enhanced views (MEVAL) to achieve more precise pattern classification in LUS. To accentuate the feature differences between patterns, we introduce a feature enhancement module that applies vertical linear fitting and k-means clustering. The multi-enhanced views are then employed in parallel with the original images, enhancing MEVAL's awareness of the feature differences between patterns. To tackle the pattern-coexistence issue, we propose an active learning strategy based on confidence sets and misclassified sets, which enables the network to recognize multiple patterns simultaneously by selectively labeling a small number of images. Our dataset comprises 5075 LUS images, approximately 4% of which exhibit multiple patterns. Experimental results showcase the effectiveness of the proposed method in the classification task, with an accuracy of 98.72%, AUC of 0.9989, sensitivity of 98.76%, and specificity of 98.16%, outperforming state-of-the-art deep-learning-based methods. A series of comprehensive ablation studies confirms the effectiveness of each proposed component and shows great potential for clinical application.

Citations: 0
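
A minimal sketch of the confidence-set bookkeeping behind an active-learning strategy like MEVAL's: predictions are split into a confident set and an uncertain set, and the uncertain ones become annotation candidates. The max-softmax confidence measure and the 0.9 threshold are illustrative assumptions, not the paper's exact criteria.

```python
# Sketch: partition model predictions into confidence sets for active learning.
import numpy as np

def split_confidence_sets(probs, threshold=0.9):
    """probs: (n_samples, n_classes) softmax outputs."""
    conf = probs.max(axis=1)
    confident = np.where(conf >= threshold)[0]  # predictions kept as-is
    uncertain = np.where(conf < threshold)[0]   # candidates for manual labeling
    return confident, uncertain

probs = np.array([[0.97, 0.02, 0.01],
                  [0.45, 0.40, 0.15]])
confident, uncertain = split_confidence_sets(probs)
print(confident, uncertain)  # [0] [1]
```
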
MRI-based vector radiomics for predicting breast cancer HER2 status and its changes after neoadjuvant therapy
IF 5.4 · Medicine, CAS Tier 2
Computerized Medical Imaging and Graphics, Vol. 118, Article 102443 · Pub Date: 2024-10-17 · DOI: 10.1016/j.compmedimag.2024.102443
Authors: Lan Zhang, Quan-Xiang Cui, Liang-Qin Zhou, Xin-Yi Wang, Hong-Xia Zhang, Yue-Min Zhu, Xi-Qiao Sang, Zi-Xiang Kuai

Purpose: To develop a novel MRI-based vector radiomic approach to predict breast cancer (BC) human epidermal growth factor receptor 2 (HER2) status (zero, low, and positive; task 1) and its changes after neoadjuvant therapy (NAT) (positive-to-positive, positive-to-negative, and positive-to-pathologic complete response; task 2).

Materials and Methods: Both dynamic contrast-enhanced (DCE) MRI data and multi-b-value (MBV) diffusion-weighted imaging (DWI) data were acquired in BC patients at two centers. Vector-radiomic and conventional-radiomic features were extracted from both DCE-MRI and MBV-DWI. After feature selection, the following models were built using the retained features and logistic regression: a vector model, a conventional model, and a combined model integrating the vector-radiomic and conventional-radiomic features. Model performance was quantified by the area under the receiver-operating-characteristic curve (AUC).

Results: The training/external test set (center 1/center 2) included 483/361 women. For task 1, the vector model (AUCs = 0.73–0.86) was superior to the conventional model (AUCs = 0.68–0.81) (p < .05), and adding vector-radiomic features to conventional-radiomic features yielded incremental predictive value (AUCs = 0.80–0.90, p < .05). For task 2, the combined MBV-DWI model (AUCs = 0.85–0.89) performed better than the conventional MBV-DWI model (AUCs = 0.73–0.82) (p < .05). In addition, comparing the combined DCE-MRI model with the combined MBV-DWI model, the former (AUCs = 0.85–0.90) outperformed the latter (AUCs = 0.80–0.85) in task 1 (p < .05), whereas the latter (AUCs = 0.85–0.89) outperformed the former (AUCs = 0.76–0.81) in task 2 (p < .05). These results hold for both the training and external test sets.

Conclusions: MRI-based vector radiomics may predict BC HER2 status and its changes after NAT, providing significant incremental prediction over conventional radiomics.

Citations: 0
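
The Results hinge on logistic-regression models scored by AUC; below is a minimal scikit-learn sketch of that evaluation step on synthetic stand-in features. The three-class labels mirror the zero/low/positive HER2 grouping of task 1; the data, feature count, and one-vs-rest multiclass AUC are illustrative assumptions.

```python
# Sketch: logistic regression on (placeholder) radiomic features, scored by AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 20)), rng.integers(0, 3, 200)  # 3 HER2 groups
X_test, y_test = rng.normal(size=(80, 20)), rng.integers(0, 3, 80)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test), multi_class="ovr")
print(f"one-vs-rest AUC: {auc:.3f}")  # ~0.5 here, since the features are random
```
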
A review of AutoML optimization techniques for medical image applications
IF 5.4 · Medicine, CAS Tier 2
Computerized Medical Imaging and Graphics, Vol. 118, Article 102441 · Pub Date: 2024-10-16 · DOI: 10.1016/j.compmedimag.2024.102441
Authors: Muhammad Junaid Ali, Mokhtar Essaid, Laurent Moalic, Lhassane Idoumghar

Abstract: Automatic analysis of medical images using machine learning techniques has gained significant importance over the years. A large number of approaches have been proposed for solving different medical image analysis tasks using machine learning and deep learning. These approaches are quite effective thanks to their ability to analyze large volumes of medical imaging data, and they can identify patterns that may be difficult for human experts to detect. However, manually designing and tuning the parameters of these algorithms is challenging and time-consuming. Furthermore, designing a generalized model that can handle different imaging modalities is difficult, as each modality has specific characteristics. To solve these problems and automate the whole pipeline of different medical image analysis tasks, numerous automatic machine learning (AutoML) techniques have been proposed, including hyper-parameter optimization (HPO), neural architecture search (NAS), and automatic data augmentation (ADA). This study provides an overview of several AutoML-based approaches for different medical imaging tasks in terms of their optimization search strategies; the choice of optimization technique (evolutionary, gradient-based, Bayesian optimization, etc.) is of central importance for these AutoML approaches. We comprehensively review existing AutoML approaches, categorize them, and perform a detailed analysis of the proposed approaches. Furthermore, current challenges and possible future research directions are discussed.

Citations: 0
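
Of the three AutoML ingredients the review names (HPO, NAS, ADA), HPO is the easiest to show compactly; the sketch below runs a random search with cross-validated scoring. The classifier, search space, and dataset are illustrative assumptions, not drawn from the review.

```python
# Sketch: the simplest HPO loop - random search with cross-validated AUC.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_breast_cancer(return_X_y=True)
space = {"n_estimators": [50, 100, 200], "max_depth": [3, 5, 10, None]}
search = RandomizedSearchCV(RandomForestClassifier(random_state=0), space,
                            n_iter=8, cv=3, scoring="roc_auc", random_state=0)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```
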
Prior knowledge-guided vision-transformer-based unsupervised domain adaptation for intubation prediction in lung disease at one week
IF 5.4 · Medicine, CAS Tier 2
Computerized Medical Imaging and Graphics, Vol. 118, Article 102442 · Pub Date: 2024-10-15 · DOI: 10.1016/j.compmedimag.2024.102442
Authors: Junlin Yang, John Anderson Garcia Henao, Nicha Dvornek, Jianchun He, Danielle V. Bower, Arno Depotter, Herkus Bajercius, Aurélie Pahud de Mortanges, Chenyu You, Christopher Gange, Roberta Eufrasia Ledda, Mario Silva, Charles S. Dela Cruz, Wolf Hautz, Harald M. Bonel, Mauricio Reyes, Lawrence H. Staib, Alexander Poellinger, James S. Duncan

Abstract: Data-driven approaches have achieved great success in various medical image analysis tasks. However, fully supervised data-driven approaches require unprecedentedly large amounts of labeled data and often generalize poorly to unseen data due to domain shifts. Various unsupervised domain adaptation (UDA) methods have been actively explored to solve these problems. Anatomical and spatial priors are common in medical imaging and have been incorporated into data-driven approaches to ease the need for labeled data and to achieve better generalization and interpretation. Inspired by the effectiveness of recent transformer-based methods in medical image analysis, the adaptability of transformer-based models has been investigated; how to incorporate prior knowledge into transformer-based UDA models, however, remains under-explored. In this paper, we introduce a prior knowledge-guided, transformer-based unsupervised domain adaptation (PUDA) pipeline. It regularizes the vision transformer attention heads using anatomical and spatial prior information shared by the source and target domains, which provides additional insight into the similarity of the underlying data distributions across domains. Besides the global alignment of class tokens, it assigns local weights to guide token distribution alignment via adversarial training. We evaluate the proposed method on a clinical outcome prediction task in which computed tomography (CT) and chest X-ray (CXR) data are collected and used to predict patients' intubation status at one week. Abnormal lesions, annotated in the source-domain scans, serve as the anatomical and spatial prior for this task. Extensive experiments show the effectiveness of the proposed PUDA method.

Citations: 0
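
A hedged sketch of the attention-regularization idea at PUDA's core: pull the vision transformer's class-token attention toward an anatomical prior shared across domains. Reducing the prior to per-token weights and penalizing with KL divergence are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: regularize class-token attention toward a per-token prior distribution.
import torch
import torch.nn.functional as F

def attention_prior_loss(attn, prior):
    """attn:  (batch, heads, tokens) class-token attention over patch tokens.
    prior: (batch, tokens) prior weights summing to 1 per image."""
    attn = attn.mean(dim=1)                       # average over attention heads
    attn = attn / attn.sum(dim=-1, keepdim=True)  # renormalize to a distribution
    return F.kl_div(attn.clamp_min(1e-8).log(), prior, reduction="batchmean")

attn = torch.softmax(torch.randn(2, 8, 196), dim=-1)  # e.g. 14x14 patch tokens
prior = torch.full((2, 196), 1.0 / 196)               # uniform placeholder prior
loss = attention_prior_loss(attn, prior)
```
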
Distance guided generative adversarial network for explainable medical image classifications
IF 5.4 · Medicine, CAS Tier 2
Computerized Medical Imaging and Graphics, Vol. 118, Article 102444 · Pub Date: 2024-10-15 · DOI: 10.1016/j.compmedimag.2024.102444
Authors: Xiangyu Xiong, Yue Sun, Xiaohong Liu, Wei Ke, Chan-Tong Lam, Jiangang Chen, Mingfeng Jiang, Mingwei Wang, Hui Xie, Tong Tong, Qinquan Gao, Hao Chen, Tao Tan

Abstract: Despite the potential benefits of data augmentation for mitigating data insufficiency, traditional augmentation methods primarily rely on prior intra-domain knowledge, while advanced generative adversarial networks (GANs) generate inter-domain samples of limited variety; both therefore contribute little to describing the decision boundary for binary classification. In this paper, we propose a distance-guided GAN (DisGAN) that controls the degree of variation of generated samples in the hyperplane space. Specifically, we instantiate the idea of DisGAN in two ways: vertical distance GAN (VerDisGAN), where inter-domain generation is conditioned on vertical distances, and horizontal distance GAN (HorDisGAN), where intra-domain generation is conditioned on horizontal distances. Furthermore, VerDisGAN can produce class-specific regions by mapping the source images to the hyperplane. Experimental results show that DisGAN consistently outperforms GAN-based augmentation methods with explainable binary classification. The proposed method applies to different classification architectures and has the potential to extend to multi-class classification. The code is available at https://github.com/yXiangXiong/DisGAN.

Citations: 0
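
A sketch of the signal DisGAN conditions on: a sample's signed distance to a classification hyperplane, which the vertical/horizontal variants vary to control how far generated samples move. Realizing the hyperplane as a plain linear layer over feature vectors is an illustrative assumption.

```python
# Sketch: signed distance to a learned hyperplane as a GAN conditioning signal.
import torch
import torch.nn as nn

class HyperplaneDistance(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 1)  # w^T x + b defines the hyperplane

    def forward(self, feats):
        # signed distance = (w^T x + b) / ||w||
        return self.fc(feats) / self.fc.weight.norm(p=2).clamp_min(1e-8)

dist = HyperplaneDistance(128)
feats = torch.randn(4, 128)
d = dist(feats)                                          # (4, 1) distances
cond_input = torch.cat([torch.randn(4, 64), d], dim=1)   # noise + distance condition
```
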
An anthropomorphic diagnosis system of pulmonary nodules using weak annotation-based deep learning
IF 5.4 · Medicine, CAS Tier 2
Computerized Medical Imaging and Graphics, Vol. 118, Article 102438 · Pub Date: 2024-10-10 · DOI: 10.1016/j.compmedimag.2024.102438
Authors: Lipeng Xie, Yongrui Xu, Mingfeng Zheng, Yundi Chen, Min Sun, Michael A. Archer, Wenjun Mao, Yubing Tong, Yuan Wan

Abstract: Accurate categorization of lung nodules in CT scans is essential for the prompt detection and diagnosis of lung cancer. Categorizing the grade and texture of nodules is particularly significant, since it can help radiologists and clinicians make better-informed decisions about nodule management. However, existing nodule classification techniques serve the single function of nodule classification and rely on extensive amounts of high-quality annotation data, which does not meet the requirements of clinical practice. To address this issue, we develop an anthropomorphic diagnosis system for pulmonary nodules (PNs) based on deep learning (DL) that is trained on weakly annotated data and performs comparably to full-annotation-based diagnosis systems. The proposed system uses DL models to classify PNs (benign vs. malignant) from weak annotations, eliminating the need for time-consuming and labor-intensive manual annotation of PNs. Moreover, the PN classification networks, augmented with handcrafted shape features acquired through the ball-scale transform, can differentiate PNs with diverse labels, including pure ground-glass opacities, part-solid nodules, and solid nodules. Through 5-fold cross-validation on two datasets, the system achieved: (1) an area under the curve (AUC) of 0.938 for PN localization and 0.912 for PN differential diagnosis on the LIDC-IDRI dataset of 814 testing cases, and (2) an AUC of 0.943 for PN localization and 0.815 for PN differential diagnosis on an in-house dataset of 822 testing cases. In summary, the system demonstrates efficient localization and differential diagnosis of PNs in resource-limited environments and could be translated into clinical use in the future.

Citations: 0
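
The system augments its classification networks with handcrafted shape features from the ball-scale transform; the sketch below shows only that fusion pattern (deep features concatenated with handcrafted ones ahead of the classifier head). The dimensions are illustrative assumptions, and the handcrafted inputs here are placeholders, not an actual ball-scale implementation.

```python
# Sketch: fuse deep features with handcrafted shape features before classification.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, deep_dim=256, hand_dim=16):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(deep_dim + hand_dim, 64),
                                  nn.ReLU(),
                                  nn.Linear(64, 2))  # benign vs. malignant logits

    def forward(self, deep_feats, hand_feats):
        return self.head(torch.cat([deep_feats, hand_feats], dim=1))

# deep_feats would come from a CNN backbone; hand_feats from shape descriptors.
logits = FusionClassifier()(torch.randn(8, 256), torch.randn(8, 16))
```
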
Corrigendum to 'Development and evaluation of an integrated model based on a deep segmentation network and demography-added radiomics algorithm for segmentation and diagnosis of early lung adenocarcinoma' [Computerized Medical Imaging and Graphics Volume 109 (2023) 102299]
IF 5.4 · Medicine, CAS Tier 2
Computerized Medical Imaging and Graphics, Vol. 120, Article 102428 · Pub Date: 2024-10-02 · DOI: 10.1016/j.compmedimag.2024.102428
Authors: Juyoung Lee, Jaehee Chun, Hojin Kim, Jin Sung Kim, Seong Yong Park

Citations: 0
MultiNet 2.0: A lightweight attention-based deep learning network for stenosis measurement in carotid ultrasound scans and cardiovascular risk assessment
IF 5.4 · Medicine, CAS Tier 2
Computerized Medical Imaging and Graphics, Vol. 117, Article 102437 · Pub Date: 2024-10-01 · DOI: 10.1016/j.compmedimag.2024.102437
Authors: Mainak Biswas, Luca Saba, Mannudeep Kalra, Rajesh Singh, J. Fernandes e Fernandes, Vijay Viswanathan, John R. Laird, Laura E. Mantella, Amer M. Johri, Mostafa M. Fouda, Jasjit S. Suri

Background: Cardiovascular disease (CVD) causes 19 million fatalities each year and costs nations billions of dollars. Surrogate biomarkers are established methods for CVD risk stratification; however, manual inspection is costly, cumbersome, and error-prone. Contemporary artificial intelligence (AI) tools for segmentation and risk prediction, including older deep learning (DL) networks, employ simple merge connections, which may cause a semantic loss of information and hence lower accuracy.

Methodology: We hypothesize that DL networks enhanced with attention mechanisms can segment better than older DL models, since attention lets a model concentrate on relevant features and thereby better understand and interpret images. This study proposes MultiNet 2.0 (AtheroPoint, Roseville, CA, USA), in which two attention networks segment the lumen from common carotid artery (CCA) ultrasound images and predict CVD risk.

Results: The database consisted of 407 ultrasound CCA images, of both the left and right sides, taken from 204 patients. Two experts were hired to delineate borders on the 407 images, generating two ground truths (GT1 and GT2). The results were far better than those of contemporary models. The lumen dimension (LD) errors for GT1 and GT2 were 0.13±0.08 and 0.16±0.07 mm, respectively, the best on the market. The AUCs for detecting low-, moderate-, and high-risk patients from stenosis data were 0.88, 0.98, and 1.00 for GT1, and 0.93, 0.97, and 1.00 for GT2, respectively. The system can be fully adopted for clinical practice within the AtheroEdge™ model by AtheroPoint, Roseville, CA, USA.

Citations: 0
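
MultiNet 2.0 is motivated by replacing simple merge connections with attention; the sketch below shows a generic additive attention gate of the kind commonly applied to skip connections in segmentation networks, offered as an illustration rather than the paper's actual module.

```python
# Sketch: additive attention gate that re-weights skip features before merging.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.wx = nn.Conv2d(ch, ch, 1)   # transform skip (encoder) features
        self.wg = nn.Conv2d(ch, ch, 1)   # transform gating (decoder) features
        self.psi = nn.Conv2d(ch, 1, 1)   # collapse to a spatial attention map

    def forward(self, skip, gate):
        a = torch.sigmoid(self.psi(torch.relu(self.wx(skip) + self.wg(gate))))
        return skip * a                  # attended skip features for the merge

skip, gate = torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64)
out = AttentionGate(32)(skip, gate)
```
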