Computerized Medical Imaging and Graphics: Latest Articles

DSIFNet: Implicit feature network for nasal cavity and vestibule segmentation from 3D head CT
IF 5.4 · Q2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2024-11-12 · DOI: 10.1016/j.compmedimag.2024.102462
Yi Lu , Hongjian Gao , Jikuan Qiu , Zihan Qiu , Junxiu Liu , Xiangzhi Bai
{"title":"DSIFNet: Implicit feature network for nasal cavity and vestibule segmentation from 3D head CT","authors":"Yi Lu ,&nbsp;Hongjian Gao ,&nbsp;Jikuan Qiu ,&nbsp;Zihan Qiu ,&nbsp;Junxiu Liu ,&nbsp;Xiangzhi Bai","doi":"10.1016/j.compmedimag.2024.102462","DOIUrl":"10.1016/j.compmedimag.2024.102462","url":null,"abstract":"<div><div>This study is dedicated to accurately segment the nasal cavity and its intricate internal anatomy from head CT images, which is critical for understanding nasal physiology, diagnosing diseases, and planning surgeries. Nasal cavity and it’s anatomical structures such as the sinuses, and vestibule exhibit significant scale differences, with complex shapes and variable microstructures. These features require the segmentation method to have strong cross-scale feature extraction capabilities. To effectively address this challenge, we propose an image segmentation network named the Deeply Supervised Implicit Feature Network (DSIFNet). This network uniquely incorporates an Implicit Feature Function Module Guided by Local and Global Positional Information (LGPI-IFF), enabling effective fusion of features across scales and enhancing the network's ability to recognize details and overall structures. Additionally, we introduce a deep supervision mechanism based on implicit feature functions in the network's decoding phase, optimizing the utilization of multi-scale feature information, thus improving segmentation precision and detail representation. Furthermore, we constructed a dataset comprising 7116 CT volumes (including 1,292,508 slices) and implemented PixPro-based self-supervised pretraining to utilize unlabeled data for enhanced feature extraction. Our tests on nasal cavity and vestibule segmentation, conducted on a dataset comprising 128 head CT volumes (including 34,006 slices), demonstrate the robustness and superior performance of proposed method, achieving leading results across multiple segmentation metrics.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"118 ","pages":"Article 102462"},"PeriodicalIF":5.4,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142656920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
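The paper's deep supervision mechanism operates on implicit feature functions, which the abstract does not specify. As a rough sketch of plain deep supervision (auxiliary losses attached to intermediate decoder scales, with labels downsampled to match), a generic PyTorch version might look like the following; the heads, weights, and tensor shapes are all assumptions, not the DSIFNet formulation.

```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(decoder_feats, aux_heads, target, weights=(1.0, 0.5, 0.25)):
    """Sum weighted cross-entropy losses over decoder scales.

    decoder_feats: list of feature maps, e.g. [(B, C, D, H, W), ...]
    aux_heads:     list of 1x1x1 conv heads mapping features to class logits
    target:        (B, D, H, W) integer label volume at full resolution
    """
    total = 0.0
    for feat, head, w in zip(decoder_feats, aux_heads, weights):
        logits = head(feat)
        # Downsample labels (nearest neighbor) to this scale's spatial size.
        tgt = F.interpolate(target[:, None].float(), size=logits.shape[2:],
                            mode="nearest").squeeze(1).long()
        total = total + w * F.cross_entropy(logits, tgt)
    return total
```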
AFSegNet: few-shot 3D ankle-foot bone segmentation via hierarchical feature distillation and multi-scale attention and fusion
IF 5.4 · Q2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2024-11-01 · DOI: 10.1016/j.compmedimag.2024.102456
Yuan Huang , Sven A. Holcombe , Stewart C. Wang , Jisi Tang
{"title":"AFSegNet: few-shot 3D ankle-foot bone segmentation via hierarchical feature distillation and multi-scale attention and fusion","authors":"Yuan Huang ,&nbsp;Sven A. Holcombe ,&nbsp;Stewart C. Wang ,&nbsp;Jisi Tang","doi":"10.1016/j.compmedimag.2024.102456","DOIUrl":"10.1016/j.compmedimag.2024.102456","url":null,"abstract":"<div><div>Accurate segmentation of ankle and foot bones from CT scans is essential for morphological analysis. Ankle and foot bone segmentation challenges due to the blurred bone boundaries, narrow inter-bone gaps, gaps in the cortical shell, and uneven spongy bone textures. Our study endeavors to create a deep learning framework that harnesses advantages of 3D deep learning and tackles the hurdles in accurately segmenting ankle and foot bones from clinical CT scans. A few-shot framework AFSegNet is proposed considering the computational cost, which comprises three 3D deep-learning networks adhering to the principles of progressing from simple to complex tasks and network structures. Specifically, a shallow network first over-segments the foreground, and along with the foreground ground truth are used to supervise a subsequent network to detect the over-segmented regions, which are overwhelmingly inter-bone gaps. The foreground and inter-bone gap probability map are then input into a network with multi-scale attentions and feature fusion, a loss function combining region-, boundary-, and topology-based terms to get the fine-level bone segmentation. AFSegNet is applied to the 16-class segmentation task utilizing 123 in-house CT scans, which only requires a GPU with 24 GB memory since the three sub-networks can be successively and individually trained. AFSegNet achieves a Dice of 0.953 and average surface distance of 0.207. The ablation study and comparison with two basic state-of-the-art networks indicates the effectiveness of the progressively distilled features, attention and feature fusion modules, and hybrid loss functions, with the mean surface distance error decreased up to 50 %.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"118 ","pages":"Article 102456"},"PeriodicalIF":5.4,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142592954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
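The abstract names a hybrid loss combining region-, boundary-, and topology-based terms without giving its exact form. Below is a minimal PyTorch sketch of a plausible region + boundary combination (soft Dice plus boundary-weighted cross-entropy); the topology term is omitted and the weights are invented, so this illustrates the general idea rather than the authors' loss.

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(probs, target, eps=1e-6):
    # Region term: 1 - soft Dice overlap. probs/target: float (B, 1, D, H, W).
    inter = (probs * target).sum(dim=(2, 3, 4))
    denom = probs.sum(dim=(2, 3, 4)) + target.sum(dim=(2, 3, 4))
    return (1 - (2 * inter + eps) / (denom + eps)).mean()

def boundary_weight(target, k=3):
    # Crude boundary map: voxels where dilation and erosion of the mask disagree.
    pad = k // 2
    dil = F.max_pool3d(target, k, stride=1, padding=pad)
    ero = -F.max_pool3d(-target, k, stride=1, padding=pad)
    return (dil - ero).clamp(0, 1)

def hybrid_loss(probs, target, w_region=1.0, w_boundary=1.0):
    region = soft_dice_loss(probs, target)
    wmap = boundary_weight(target)
    bce = F.binary_cross_entropy(probs, target, reduction="none")
    boundary = (wmap * bce).sum() / wmap.sum().clamp(min=1.0)
    return w_region * region + w_boundary * boundary
```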
VLFATRollout: Fully transformer-based classifier for retinal OCT volumes
IF 5.4 · Q2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2024-10-29 · DOI: 10.1016/j.compmedimag.2024.102452
Marzieh Oghbaie , Teresa Araújo , Ursula Schmidt-Erfurth , Hrvoje Bogunović
{"title":"VLFATRollout: Fully transformer-based classifier for retinal OCT volumes","authors":"Marzieh Oghbaie ,&nbsp;Teresa Araújo ,&nbsp;Ursula Schmidt-Erfurth ,&nbsp;Hrvoje Bogunović","doi":"10.1016/j.compmedimag.2024.102452","DOIUrl":"10.1016/j.compmedimag.2024.102452","url":null,"abstract":"<div><h3>Background and Objective:</h3><div>Despite the promising capabilities of 3D transformer architectures in video analysis, their application to high-resolution 3D medical volumes encounters several challenges. One major limitation is the high number of 3D patches, which reduces the efficiency of the global self-attention mechanisms of transformers. Additionally, background information can distract vision transformers from focusing on crucial areas of the input image, thereby introducing noise into the final representation. Moreover, the variability in the number of slices per volume complicates the development of models capable of processing input volumes of any resolution while simple solutions like subsampling may risk losing essential diagnostic details.</div></div><div><h3>Methods:</h3><div>To address these challenges, we introduce an end-to-end transformer-based framework, variable length feature aggregator transformer rollout (VLFATRollout), to classify volumetric data. The proposed VLFATRollout enjoys several merits. First, the proposed VLFATRollout can effectively mine slice-level fore-background information with the help of transformer’s attention matrices. Second, randomization of volume-wise resolution (i.e. the number of slices) during training enhances the learning capacity of the learnable positional embedding (PE) assigned to each volume slice. This technique allows the PEs to generalize across neighboring slices, facilitating the handling of high-resolution volumes at the test time.</div></div><div><h3>Results:</h3><div>VLFATRollout was thoroughly tested on the retinal optical coherence tomography (OCT) volume classification task, demonstrating a notable average improvement of 5.47% in balanced accuracy over the leading convolutional models for a 5-class diagnostic task. These results emphasize the effectiveness of our framework in enhancing slice-level representation and its adaptability across different volume resolutions, paving the way for advanced transformer applications in medical image analysis. The code is available at <span><span>https://github.com/marziehoghbaie/VLFATRollout/</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"118 ","pages":"Article 102452"},"PeriodicalIF":5.4,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142570320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
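The method's name references attention rollout. A minimal implementation of the standard rollout computation (Abnar & Zuidema, 2020), averaging heads, adding the residual identity, and chaining layer attentions, is sketched below; how VLFATRollout actually uses these matrices for foreground-background mining is not specified in the abstract.

```python
import torch

def attention_rollout(attn_mats):
    """Standard attention rollout.

    attn_mats: list of per-layer attention tensors, each (B, heads, N, N).
    Returns a (B, N, N) matrix of accumulated token-to-token attention.
    """
    rollout = None
    for attn in attn_mats:
        a = attn.mean(dim=1)                     # average over heads
        eye = torch.eye(a.size(-1), device=a.device)
        a = a + eye                              # account for residual connections
        a = a / a.sum(dim=-1, keepdim=True)      # re-normalize rows
        rollout = a if rollout is None else a @ rollout
    return rollout
```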
WISE: Efficient WSI selection for active learning in histopathology
IF 5.4 · Q2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2024-10-28 · DOI: 10.1016/j.compmedimag.2024.102455
Hyeongu Kang , Mujin Kim , Young Sin Ko , Yesung Cho , Mun Yong Yi
{"title":"WISE: Efficient WSI selection for active learning in histopathology","authors":"Hyeongu Kang ,&nbsp;Mujin Kim ,&nbsp;Young Sin Ko ,&nbsp;Yesung Cho ,&nbsp;Mun Yong Yi","doi":"10.1016/j.compmedimag.2024.102455","DOIUrl":"10.1016/j.compmedimag.2024.102455","url":null,"abstract":"<div><div>Deep neural network (DNN) models have been applied to a wide variety of medical image analysis tasks, often with the successful performance outcomes that match those of medical doctors. However, given that even minor errors in a model can impact patients’ life, it is critical that these models are continuously improved. Hence, active learning (AL) has garnered attention as an effective and sustainable strategy for enhancing DNN models for the medical domain. Extant AL research in histopathology has primarily focused on patch datasets derived from whole-slide images (WSIs), a standard form of cancer diagnostic images obtained from a high-resolution scanner. However, this approach has failed to address the selection of WSIs, which can impede the performance improvement of deep learning models and increase the number of WSIs needed to achieve the target performance. This study introduces a WSI-level AL method, termed WSI-informative selection (WISE). WISE is designed to select informative WSIs using a newly formulated WSI-level class distance metric. This method aims to identify diverse and uncertain cases of WSIs, thereby contributing to model performance enhancement. WISE demonstrates state-of-the-art performance across the Colon and Stomach datasets, collected in the real world, as well as the public DigestPath dataset, significantly reducing the required number of WSIs by more than threefold compared to the one-pool dataset setting, which has been dominantly used in the field.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"118 ","pages":"Article 102455"},"PeriodicalIF":5.4,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142553819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
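The abstract describes selecting diverse and uncertain WSIs via a WSI-level class distance metric, but does not define the metric. The sketch below is one hypothetical reading: aggregate patch-level predictions into a WSI descriptor, score uncertainty by entropy, and enforce diversity by distance to already-selected WSIs. Every detail here is an assumption, not the published WISE criterion.

```python
import numpy as np

def select_wsis(wsi_patch_probs, k, alpha=0.5):
    """Greedy WSI-level selection mixing uncertainty and diversity.

    wsi_patch_probs: list of (n_patches_i, n_classes) arrays of patch predictions.
    Returns the indices of k selected WSIs.
    """
    # WSI descriptor: mean class distribution over its patches.
    desc = np.stack([p.mean(axis=0) for p in wsi_patch_probs])
    entropy = -(desc * np.log(desc + 1e-12)).sum(axis=1)  # uncertainty score
    selected = []
    for _ in range(k):
        if selected:
            # Diversity: distance to the nearest already-selected descriptor.
            dists = np.linalg.norm(
                desc[:, None, :] - desc[selected][None, :, :], axis=2).min(axis=1)
        else:
            dists = np.ones(len(desc))
        score = alpha * entropy + (1 - alpha) * dists
        score[selected] = -np.inf  # never re-select
        selected.append(int(score.argmax()))
    return selected
```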
RPDNet: A reconstruction-regularized parallel decoders network for rectal tumor and rectum co-segmentation
IF 5.4 · Q2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2024-10-28 · DOI: 10.1016/j.compmedimag.2024.102453
WenXiang Huang , Ye Xu , Yuanyuan Wang , Hongtu Zheng , Yi Guo
{"title":"RPDNet: A reconstruction-regularized parallel decoders network for rectal tumor and rectum co-segmentation","authors":"WenXiang Huang ,&nbsp;Ye Xu ,&nbsp;Yuanyuan Wang ,&nbsp;Hongtu Zheng ,&nbsp;Yi Guo","doi":"10.1016/j.compmedimag.2024.102453","DOIUrl":"10.1016/j.compmedimag.2024.102453","url":null,"abstract":"<div><div>Accurate segmentation of rectal cancer tumor and rectum in magnetic resonance imaging (MRI) is significant for tumor precise diagnosis and treatment plans determination. Variable shapes and unclear boundaries of rectal tumors make this task particularly challenging. Only a few studies have explored deep learning networks in rectal tumor segmentation, which mainly adopt the classical encoder-decoder structure. The frequent downsampling operations during feature extraction result in the loss of detailed information, limiting the network's ability to precisely capture the shape and boundary of rectal tumors. This paper proposes a Reconstruction-regularized Parallel Decoder network (RPDNet) to address the problem of information loss and obtain accurate co-segmentation results of both rectal tumor and rectum. RPDNet initially establishes a shared encoder and parallel decoders framework to fully utilize the common knowledge between two segmentation labels while reducing the number of network parameters. An auxiliary reconstruction branch is subsequently introduced by calculating the consistency loss between the reconstructed and input images to preserve sufficient anatomical structure information. Moreover, a non-parameter target-adaptive attention module is proposed to distinguish the unclear boundary by enhancing the feature-level contrast between rectal tumors and normal tissues. The experimental results indicate that the proposed method outperforms state-of-the-art approaches in rectal tumor and rectum segmentation tasks, with Dice coefficients of 84.91 % and 90.36 %, respectively, demonstrating its potential application value in clinical practice.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"118 ","pages":"Article 102453"},"PeriodicalIF":5.4,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142570317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
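A shared encoder with parallel decoders and a reconstruction-consistency regularizer can be expressed compactly. The following PyTorch skeleton shows the wiring under assumed module interfaces (encoder and the dec_* decoders are placeholders); the attention module and the exact segmentation losses are omitted.

```python
import torch.nn as nn

class SharedEncoderParallelDecoders(nn.Module):
    """Shared encoder feeding three parallel branches: one decoder per
    segmentation target plus an image-reconstruction branch."""

    def __init__(self, encoder, dec_tumor, dec_rectum, dec_recon):
        super().__init__()
        self.encoder = encoder
        self.dec_tumor = dec_tumor
        self.dec_rectum = dec_rectum
        self.dec_recon = dec_recon

    def forward(self, x):
        feats = self.encoder(x)
        return self.dec_tumor(feats), self.dec_rectum(feats), self.dec_recon(feats)

def total_loss(seg_loss, tumor_logits, rectum_logits, recon, x,
               y_tumor, y_rectum, lam=0.1):
    # The reconstruction-consistency term regularizes the shared encoder so it
    # retains anatomical detail otherwise lost to downsampling.
    recon_loss = nn.functional.mse_loss(recon, x)
    return (seg_loss(tumor_logits, y_tumor)
            + seg_loss(rectum_logits, y_rectum)
            + lam * recon_loss)
```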
Robust brain MRI image classification with SIBOW-SVM
IF 5.4 · Q2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2024-10-24 · DOI: 10.1016/j.compmedimag.2024.102451
Liyun Zeng , Hao Helen Zhang
{"title":"Robust brain MRI image classification with SIBOW-SVM","authors":"Liyun Zeng ,&nbsp;Hao Helen Zhang","doi":"10.1016/j.compmedimag.2024.102451","DOIUrl":"10.1016/j.compmedimag.2024.102451","url":null,"abstract":"<div><div>Primary Central Nervous System tumors in the brain are among the most aggressive diseases affecting humans. Early detection and classification of brain tumor types, whether benign or malignant, glial or non-glial, is critical for cancer prevention and treatment, ultimately improving human life expectancy. Magnetic Resonance Imaging (MRI) is the most effective technique for brain tumor detection, generating comprehensive brain scans. However, human examination can be error-prone and inefficient due to the complexity, size, and location variability of brain tumors. Recently, automated classification techniques using machine learning methods, such as Convolutional Neural Networks (CNNs), have demonstrated significantly higher accuracy than manual screening. However, deep learning-based image classification methods, including CNNs, face challenges in estimating class probabilities without proper model calibration (Guo et al., 2017; Minderer et al., 2021). In this paper, we propose a novel brain tumor image classification method called SIBOW-SVM, which integrates the Bag-of-Features model with SIFT feature extraction and weighted Support Vector Machines. This new approach can effectively extract hidden image features, enabling differentiation of various tumor types, provide accurate label predictions, and estimate probabilities of images belonging to each class, offering high-confidence classification decisions. We have also developed scalable and parallelable algorithms to facilitate the practical implementation of SIBOW-SVM for massive image datasets. To benchmark our method, we apply SIBOW-SVM to a public dataset of brain tumor MRI images containing four classes: glioma, meningioma, pituitary, and normal. Our results demonstrate that the new method outperforms state-of-the-art techniques, including CNNs, in terms of uncertainty quantification, classification accuracy, computational efficiency, and data robustness.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"118 ","pages":"Article 102451"},"PeriodicalIF":5.4,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142607468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
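The SIFT + Bag-of-Features + weighted SVM pipeline is built from standard components. A condensed sketch using OpenCV and scikit-learn follows; the codebook size, kernel, and class weighting are illustrative guesses, and the authors' scalable, parallelizable implementation is certainly more involved.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bof_histogram(img_gray, kmeans, n_words):
    # SIFT descriptors -> normalized visual-word histogram (bag of features).
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(img_gray, None)
    if desc is None:
        return np.zeros(n_words)
    words = kmeans.predict(desc.astype(np.float32))
    hist = np.bincount(words, minlength=n_words).astype(float)
    return hist / max(hist.sum(), 1.0)

def train_sibow_like(train_imgs, train_labels, n_words=200):
    # Build a codebook from all training descriptors, then fit a weighted SVM
    # with probability estimates for high-confidence class predictions.
    sift = cv2.SIFT_create()
    all_desc = [d for img in train_imgs
                for d in [sift.detectAndCompute(img, None)[1]] if d is not None]
    kmeans = KMeans(n_clusters=n_words, n_init=5).fit(np.vstack(all_desc))
    X = np.stack([bof_histogram(img, kmeans, n_words) for img in train_imgs])
    clf = SVC(kernel="rbf", class_weight="balanced", probability=True)
    clf.fit(X, train_labels)
    return kmeans, clf
```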
Active learning based on multi-enhanced views for classification of multiple patterns in lung ultrasound images
IF 5.4 · Q2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2024-10-24 · DOI: 10.1016/j.compmedimag.2024.102454
Yuanlu Ni , Yang Cong , Chengqian Zhao , Jinhua Yu , Yin Wang , Guohui Zhou , Mengjun Shen
{"title":"Active learning based on multi-enhanced views for classification of multiple patterns in lung ultrasound images","authors":"Yuanlu Ni ,&nbsp;Yang Cong ,&nbsp;Chengqian Zhao ,&nbsp;Jinhua Yu ,&nbsp;Yin Wang ,&nbsp;Guohui Zhou ,&nbsp;Mengjun Shen","doi":"10.1016/j.compmedimag.2024.102454","DOIUrl":"10.1016/j.compmedimag.2024.102454","url":null,"abstract":"<div><div>There are several main patterns in lung ultrasound (LUS) images, including A-lines, B-lines, consolidation and pleural effusion. LUS images of healthy lungs typically only exhibit A-lines, while other patterns may emerge or coexist in LUS images associated with different lung diseases. The accurate categorization of these primary patterns is pivotal for effective lung disease screening. However, two challenges complicate the classification task: the first is the inherent blurring of feature differences between main patterns due to ultrasound imaging properties; and the second is the potential coexistence of multiple patterns in a single case, with only the most dominant pattern being clinically annotated. To address these challenges, we propose the active learning based on multi-enhanced views (MEVAL) method to achieve more precise pattern classification in LUS. To accentuate feature differences between multiple patterns, we introduce a feature enhancement module by applying vertical linear fitting and k-means clustering. The multi-enhanced views are then employed in parallel with the original images, thus enhancing MEVAL’s awareness of feature differences between multiple patterns. To tackle the patterns coexistence issue, we propose an active learning strategy based on confidence sets and misclassified sets. This strategy enables the network to simultaneously recognize multiple patterns by selectively labeling of a small number of images. Our dataset comprises 5075 LUS images, with approximately 4% exhibiting multiple patterns. Experimental results showcase the effectiveness of the proposed method in the classification task, with accuracy of 98.72%, AUC of 0.9989, sensitivity of 98.76%, and specificity of 98.16%, which outperforms than the state-of-the-art deep learning-based methods. A series of comprehensive ablation studies suggest the effectiveness of each proposed component and show great potential in clinical application.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"118 ","pages":"Article 102454"},"PeriodicalIF":5.4,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142565040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
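The feature enhancement module uses vertical linear fitting and k-means clustering, but the abstract gives no details. As one hypothetical illustration of the k-means part only, the sketch below quantizes a grayscale LUS image into intensity clusters to produce an auxiliary enhanced view; the vertical-fitting step is not shown, and this is not the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_enhanced_view(img, n_clusters=4):
    """Quantize a grayscale ultrasound image (2D float array) into intensity
    clusters, yielding a contrast-enhanced auxiliary view in which bright
    structures (e.g. the pleural line, B-lines) become uniform bands."""
    km = KMeans(n_clusters=n_clusters, n_init=4).fit(
        img.reshape(-1, 1).astype(np.float32))
    # Replace each pixel with its cluster center intensity.
    return km.cluster_centers_[km.labels_, 0].reshape(img.shape)
```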
MRI-based vector radiomics for predicting breast cancer HER2 status and its changes after neoadjuvant therapy
IF 5.4 · Q2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2024-10-17 · DOI: 10.1016/j.compmedimag.2024.102443
Lan Zhang , Quan-Xiang Cui , Liang-Qin Zhou , Xin-Yi Wang , Hong-Xia Zhang , Yue-Min Zhu , Xi-Qiao Sang , Zi-Xiang Kuai
{"title":"MRI-based vector radiomics for predicting breast cancer HER2 status and its changes after neoadjuvant therapy","authors":"Lan Zhang ,&nbsp;Quan-Xiang Cui ,&nbsp;Liang-Qin Zhou ,&nbsp;Xin-Yi Wang ,&nbsp;Hong-Xia Zhang ,&nbsp;Yue-Min Zhu ,&nbsp;Xi-Qiao Sang ,&nbsp;Zi-Xiang Kuai","doi":"10.1016/j.compmedimag.2024.102443","DOIUrl":"10.1016/j.compmedimag.2024.102443","url":null,"abstract":"<div><h3>Purpose</h3><div>: To develop a novel MRI-based vector radiomic approach to predict breast cancer (BC) human epidermal growth factor receptor 2 (HER2) status (zero, low, and positive; task 1) and its changes after neoadjuvant therapy (NAT) (positive-to-positive, positive-to-negative, and positive-to-pathologic complete response; task 2).</div></div><div><h3>Materials and Methods</h3><div>: Both dynamic contrast-enhanced (DCE) MRI data and multi-<em>b</em>-value (MBV) diffusion-weighted imaging (DWI) data were acquired in BC patients at two centers. Vector-radiomic and conventional-radiomic features were extracted from both DCE-MRI and MBV-DWI. After feature selection, the following models were built using the retained features and logistic regression: vector model, conventional model, and combined model that integrates the vector-radiomic and conventional-radiomic features. The models’ performances were quantified by the area under the receiver-operating characteristic curve (AUC).</div></div><div><h3>Results:</h3><div>The training/external test set (center 1/2) included 483/361 women. For task 1, the vector model (AUCs=0.73<span><math><mo>∼</mo></math></span>0.86) was superior to (<em>p</em><span><math><mo>&lt;</mo></math></span>.05) the conventional model (AUCs=0.68<span><math><mo>∼</mo></math></span>0.81), and the addition of vector-radiomic features to conventional-radiomic features yielded an incremental predictive value (AUCs=0.80<span><math><mo>∼</mo></math></span>0.90, <span><math><mrow><mi>p</mi><mo>&lt;</mo><mo>.</mo><mn>05</mn></mrow></math></span>). For task 2, the combined MBV-DWI model (AUCs=0.85<span><math><mo>∼</mo></math></span>0.89) performed better than (<span><math><mrow><mi>p</mi><mo>&lt;</mo><mo>.</mo><mn>05</mn></mrow></math></span>) the conventional MBV-DWI model (AUCs=0.73<span><math><mo>∼</mo></math></span>0.82). In addition, for the combined DCE-MRI model and the combined MBV-DWI model, the former (AUCs=0.85<span><math><mo>∼</mo></math></span>0.90) outperformed (<span><math><mrow><mi>p</mi><mo>&lt;</mo><mo>.</mo><mn>05</mn></mrow></math></span>) the latter (AUCs=0.80<span><math><mo>∼</mo></math></span>0.85) in task 1, whereas the latter (AUCs=0.85<span><math><mo>∼</mo></math></span>0.89) outperformed (<span><math><mrow><mi>p</mi><mo>&lt;</mo><mo>.</mo><mn>05</mn></mrow></math></span>) the former (AUCs=0.76<span><math><mo>∼</mo></math></span>0.81) in task 2. 
The above results are true for the training and external test sets.</div></div><div><h3>Conclusions:</h3><div>MRI-based vector radiomics may predict BC HER2 status and its changes after NAT and provide significant incremental prediction over and above conventional radiomics.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"118 ","pages":"Article 102443"},"PeriodicalIF":5.4,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
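The modeling recipe, feature selection followed by logistic regression evaluated by AUC, maps directly onto scikit-learn. A minimal sketch for the 3-class HER2 task follows; the selector, k, and solver settings are placeholders rather than the authors' choices.

```python
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_radiomic_model(X_train, y_train, X_test, y_test, k=20):
    # Standardize, keep the k most class-associated features, fit logistic
    # regression, and report a one-vs-rest macro AUC for the 3-class task.
    model = make_pipeline(StandardScaler(),
                          SelectKBest(f_classif, k=k),
                          LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    probs = model.predict_proba(X_test)
    return roc_auc_score(y_test, probs, multi_class="ovr", average="macro")
```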
A review of AutoML optimization techniques for medical image applications
IF 5.4 · Q2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2024-10-16 · DOI: 10.1016/j.compmedimag.2024.102441
Muhammad Junaid Ali, Mokhtar Essaid, Laurent Moalic, Lhassane Idoumghar
{"title":"A review of AutoML optimization techniques for medical image applications","authors":"Muhammad Junaid Ali,&nbsp;Mokhtar Essaid,&nbsp;Laurent Moalic,&nbsp;Lhassane Idoumghar","doi":"10.1016/j.compmedimag.2024.102441","DOIUrl":"10.1016/j.compmedimag.2024.102441","url":null,"abstract":"<div><div>Automatic analysis of medical images using machine learning techniques has gained significant importance over the years. A large number of approaches have been proposed for solving different medical image analysis tasks using machine learning and deep learning approaches. These approaches are quite effective thanks to their ability to analyze large volume of medical imaging data. Moreover, they can also identify patterns that may be difficult for human experts to detect. Manually designing and tuning the parameters of these algorithms is a challenging and time-consuming task. Furthermore, designing a generalized model that can handle different imaging modalities is difficult, as each modality has specific characteristics. To solve these problems and automate the whole pipeline of different medical image analysis tasks, numerous Automatic Machine Learning (AutoML) techniques have been proposed. These techniques include Hyper-parameter Optimization (HPO), Neural Architecture Search (NAS), and Automatic Data Augmentation (ADA). This study provides an overview of several AutoML-based approaches for different medical imaging tasks in terms of optimization search strategies. The usage of optimization techniques (evolutionary, gradient-based, Bayesian optimization, etc.) is of significant importance for these AutoML approaches. We comprehensively reviewed existing AutoML approaches, categorized them, and performed a detailed analysis of different proposed approaches. Furthermore, current challenges and possible future research directions are also discussed.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"118 ","pages":"Article 102441"},"PeriodicalIF":5.4,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142570286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
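Among the optimization strategies the review covers, random search is the simplest HPO baseline. A self-contained sketch is given below; the search space values are purely illustrative, and train_and_eval stands in for any model-training routine that returns a validation score.

```python
import random

def random_search(train_and_eval, space, n_trials=20, seed=0):
    """Minimal random-search HPO: sample configs, keep the best validation score.

    space:          dict mapping hyper-parameter name -> list of candidates.
    train_and_eval: callable(config) -> validation score (higher is better).
    """
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        score = train_and_eval(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Example (illustrative) space for a segmentation model:
space = {"lr": [1e-4, 3e-4, 1e-3], "batch_size": [2, 4, 8], "depth": [3, 4, 5]}
```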
Prior knowledge-guided vision-transformer-based unsupervised domain adaptation for intubation prediction in lung disease at one week
IF 5.4 · Q2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2024-10-15 · DOI: 10.1016/j.compmedimag.2024.102442
Junlin Yang , John Anderson Garcia Henao , Nicha Dvornek , Jianchun He , Danielle V. Bower , Arno Depotter , Herkus Bajercius , Aurélie Pahud de Mortanges , Chenyu You , Christopher Gange , Roberta Eufrasia Ledda , Mario Silva , Charles S. Dela Cruz , Wolf Hautz , Harald M. Bonel , Mauricio Reyes , Lawrence H. Staib , Alexander Poellinger , James S. Duncan
{"title":"Prior knowledge-guided vision-transformer-based unsupervised domain adaptation for intubation prediction in lung disease at one week","authors":"Junlin Yang ,&nbsp;John Anderson Garcia Henao ,&nbsp;Nicha Dvornek ,&nbsp;Jianchun He ,&nbsp;Danielle V. Bower ,&nbsp;Arno Depotter ,&nbsp;Herkus Bajercius ,&nbsp;Aurélie Pahud de Mortanges ,&nbsp;Chenyu You ,&nbsp;Christopher Gange ,&nbsp;Roberta Eufrasia Ledda ,&nbsp;Mario Silva ,&nbsp;Charles S. Dela Cruz ,&nbsp;Wolf Hautz ,&nbsp;Harald M. Bonel ,&nbsp;Mauricio Reyes ,&nbsp;Lawrence H. Staib ,&nbsp;Alexander Poellinger ,&nbsp;James S. Duncan","doi":"10.1016/j.compmedimag.2024.102442","DOIUrl":"10.1016/j.compmedimag.2024.102442","url":null,"abstract":"<div><div>Data-driven approaches have achieved great success in various medical image analysis tasks. However, fully-supervised data-driven approaches require unprecedentedly large amounts of labeled data and often suffer from poor generalization to unseen new data due to domain shifts. Various unsupervised domain adaptation (UDA) methods have been actively explored to solve these problems. Anatomical and spatial priors in medical imaging are common and have been incorporated into data-driven approaches to ease the need for labeled data as well as to achieve better generalization and interpretation. Inspired by the effectiveness of recent transformer-based methods in medical image analysis, the adaptability of transformer-based models has been investigated. How to incorporate prior knowledge for transformer-based UDA models remains under-explored. In this paper, we introduce a prior knowledge-guided and transformer-based unsupervised domain adaptation (PUDA) pipeline. It regularizes the vision transformer attention heads using anatomical and spatial prior information that is shared by both the source and target domain, which provides additional insight into the similarity between the underlying data distribution across domains. Besides the global alignment of class tokens, it assigns local weights to guide the token distribution alignment via adversarial training. We evaluate our proposed method on a clinical outcome prediction task, where Computed Tomography (CT) and Chest X-ray (CXR) data are collected and used to predict the intubation status of patients in a week. Abnormal lesions are regarded as anatomical and spatial prior information for this task and are annotated in the source domain scans. Extensive experiments show the effectiveness of the proposed PUDA method.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"118 ","pages":"Article 102442"},"PeriodicalIF":5.4,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142607466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
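The abstract mentions adversarial training for token distribution alignment. A common building block for adversarial UDA is the gradient reversal layer; the sketch below shows that generic mechanism in PyTorch, without the paper's prior-guided attention regularization or local weighting, so it should be read as background rather than the PUDA method itself.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; gradient multiplied by -lambda backward."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

def adversarial_domain_loss(features, domain_labels, domain_head, lam=1.0):
    # The domain head tries to tell source from target samples; the reversed
    # gradient pushes the feature extractor toward domain-invariant features.
    reversed_feats = GradReverse.apply(features, lam)
    logits = domain_head(reversed_feats)  # assumed to output shape (B, 1)
    return nn.functional.binary_cross_entropy_with_logits(
        logits.squeeze(-1), domain_labels.float())
```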