Latest Articles in Computerized Medical Imaging and Graphics

A multi-view contrastive learning and semi-supervised self-distillation framework for early recurrence prediction in ovarian cancer
IF 5.4 · CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-01-01 · DOI: 10.1016/j.compmedimag.2024.102477
Chi Dong, Yujiao Wu, Bo Sun, Jiayi Bo, Yufei Huang, Yikang Geng, Qianhui Zhang, Ruixiang Liu, Wei Guo, Xingling Wang, Xiran Jiang
{"title":"A multi-view contrastive learning and semi-supervised self-distillation framework for early recurrence prediction in ovarian cancer","authors":"Chi Dong ,&nbsp;Yujiao Wu ,&nbsp;Bo Sun ,&nbsp;Jiayi Bo ,&nbsp;Yufei Huang ,&nbsp;Yikang Geng ,&nbsp;Qianhui Zhang ,&nbsp;Ruixiang Liu ,&nbsp;Wei Guo ,&nbsp;Xingling Wang ,&nbsp;Xiran Jiang","doi":"10.1016/j.compmedimag.2024.102477","DOIUrl":"10.1016/j.compmedimag.2024.102477","url":null,"abstract":"<div><h3>Objective</h3><div>This study presents a novel framework that integrates contrastive learning and knowledge distillation to improve early ovarian cancer (OC) recurrence prediction, addressing the challenges posed by limited labeled data and tumor heterogeneity.</div></div><div><h3>Methods</h3><div>The research utilized CT imaging data from 585 OC patients, including 142 cases with complete follow-up information and 125 cases with unknown recurrence status. To pre-train the teacher network, 318 unlabeled images were sourced from public datasets (TCGA-OV and PLAGH-202-OC). Multi-view contrastive learning (MVCL) was employed to generate multi-view 2D tumor slices, enhancing the teacher network’s ability to extract features from complex, heterogeneous tumors with high intra-class variability. Building on this foundation, the proposed semi-supervised multi-task self-distillation (Semi-MTSD) framework integrated OC subtyping as an auxiliary task using multi-task learning (MTL). This approach allowed the co-training of a student network for recurrence prediction, leveraging both labeled and unlabeled data to improve predictive performance in data-limited settings. The student network's performance was assessed using preoperative CT images with known recurrence outcomes. Evaluation metrics included area under the receiver operating characteristic curve (AUC), accuracy (ACC), sensitivity (SEN), specificity (SPE), F1 score, floating-point operations (FLOPs), parameter count, training time, inference time, and mean corruption error (mCE).</div></div><div><h3>Results</h3><div>The proposed framework achieved an ACC of 0.862, an AUC of 0.916, a SPE of 0.895, and an F1 score of 0.831, surpassing existing methods for OC recurrence prediction. Comparative and ablation studies validated the model’s robustness, particularly in scenarios characterized by data scarcity and tumor heterogeneity.</div></div><div><h3>Conclusion</h3><div>The MVCL and Semi-MTSD framework demonstrates significant advancements in OC recurrence prediction, showcasing strong generalization capabilities in complex, data-constrained environments. This approach offers a promising pathway toward more personalized treatment strategies for OC patients.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"119 ","pages":"Article 102477"},"PeriodicalIF":5.4,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142824120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
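The MVCL pre-training above relies on a contrastive objective over multi-view tumor slices. As a point of reference, here is a minimal sketch of the standard NT-Xent contrastive loss commonly used for such pre-training; the batch construction, embedding dimension, and temperature are illustrative assumptions, not the authors' implementation.

```python
# Minimal NT-Xent (normalized temperature-scaled cross-entropy) sketch for
# contrastive pre-training on paired views; all shapes and values are illustrative.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two views of the same N samples."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                 # (2N, D) stacked views
    sim = z @ z.t() / temperature                  # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))              # exclude self-similarity
    n = z1.size(0)
    # positives: view i matches view i+N, and vice versa
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage: embeddings of two augmented multi-view slice stacks
z_a, z_b = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z_a, z_b).item())
```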
Utilizing domain knowledge to improve the classification of intravenous contrast phase of CT scans
IF 5.4 · CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-01-01 · DOI: 10.1016/j.compmedimag.2024.102458
Liangchen Liu, Jianfei Liu, Bikash Santra, Christopher Parnell, Pritam Mukherjee, Tejas Mathai, Yingying Zhu, Akshaya Anand, Ronald M. Summers
{"title":"Utilizing domain knowledge to improve the classification of intravenous contrast phase of CT scans","authors":"Liangchen Liu ,&nbsp;Jianfei Liu ,&nbsp;Bikash Santra ,&nbsp;Christopher Parnell ,&nbsp;Pritam Mukherjee ,&nbsp;Tejas Mathai ,&nbsp;Yingying Zhu ,&nbsp;Akshaya Anand ,&nbsp;Ronald M. Summers","doi":"10.1016/j.compmedimag.2024.102458","DOIUrl":"10.1016/j.compmedimag.2024.102458","url":null,"abstract":"<div><div>Multiple intravenous contrast phases of CT scans are commonly used in clinical practice to facilitate disease diagnosis. However, contrast phase information is commonly missing or incorrect due to discrepancies in CT series descriptions and imaging practices. This work aims to develop a classification algorithm to automatically determine the contrast phase of a CT scan. We hypothesize that image intensities of key organs (e.g. aorta, inferior vena cava) affected by contrast enhancement are inherent feature information to decide the contrast phase. These organs are segmented by TotalSegmentator followed by generating intensity features on each segmented organ region. Two internal and one external dataset were collected to validate the classification accuracy. In comparison with the baseline ResNet classification method that did not make use of key organs features, the proposed method achieved the comparable accuracy of 92.5% and F1 score of 92.5% in one internal dataset. The accuracy was improved from 63.9% to 79.8% and F1 score from 43.9% to 65.0% using the proposed method on the other internal dataset. The accuracy improved from 63.5% to 85.1% and the F1 score from 56.4% to 83.9% on the external dataset. Image intensity features from key organs are critical for improving the classification accuracy of contrast phases of CT scans. The classification method based on these features is robust to different scanners and imaging protocols from different institutes. Our results suggested improved classification accuracy over existing approaches, which advances the application of automatic contrast phase classification toward real clinical practice. The code for this work can be found here: (<span><span>https://github.com/rsummers11/CT_Contrast_Phase_Classifier</span><svg><path></path></svg></span>).</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"119 ","pages":"Article 102458"},"PeriodicalIF":5.4,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142911012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
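The key idea above, summarizing CT intensities inside segmented key-organ regions and classifying on those features, can be sketched as follows. The organ list, the four statistics per organ, and the random-forest classifier are illustrative assumptions; the paper's own feature set and model may differ.

```python
# Hedged sketch: intensity statistics inside key-organ masks as contrast-phase
# features. Organ names, statistics, and classifier choice are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

ORGANS = ["aorta", "inferior_vena_cava", "kidney_left", "kidney_right"]  # assumed keys

def organ_intensity_features(ct_hu, masks):
    """ct_hu: 3D array of Hounsfield units; masks: organ name -> boolean array."""
    feats = []
    for organ in ORGANS:
        vals = ct_hu[masks[organ]]
        feats += [vals.mean(), vals.std(),
                  np.percentile(vals, 25), np.percentile(vals, 75)]
    return np.asarray(feats)

rng = np.random.default_rng(0)
ct = rng.normal(loc=60.0, scale=30.0, size=(32, 32, 32))        # toy HU volume
masks = {o: rng.random((32, 32, 32)) < 0.05 for o in ORGANS}    # toy segmentations
print(organ_intensity_features(ct, masks).shape)                # (16,)

# Toy 4-phase classifier (e.g., non-contrast/arterial/venous/delayed) trained on
# random vectors standing in for features extracted from real labeled scans.
X = rng.normal(size=(40, len(ORGANS) * 4))
y = rng.integers(0, 4, size=40)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```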
Post-hoc out-of-distribution detection for cardiac MRI segmentation
IF 5.4 · CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-01-01 · DOI: 10.1016/j.compmedimag.2024.102476
Tewodros Weldebirhan Arega, Stéphanie Bricq, Fabrice Meriaudeau
{"title":"Post-hoc out-of-distribution detection for cardiac MRI segmentation","authors":"Tewodros Weldebirhan Arega ,&nbsp;Stéphanie Bricq ,&nbsp;Fabrice Meriaudeau","doi":"10.1016/j.compmedimag.2024.102476","DOIUrl":"10.1016/j.compmedimag.2024.102476","url":null,"abstract":"<div><div>In real-world scenarios, medical image segmentation models encounter input images that may deviate from the training images in various ways. These differences can arise from changes in image scanners and acquisition protocols, or even the images can come from a different modality or domain. When the model encounters these out-of-distribution (OOD) images, it can behave unpredictably. Therefore, it is important to develop a system that handles such out-of-distribution images to ensure the safe usage of the models in clinical practice. In this paper, we propose a post-hoc out-of-distribution (OOD) detection method that can be used with any pre-trained segmentation model. Our method utilizes multi-scale representations extracted from the encoder blocks of the segmentation model and employs Mahalanobis distance as a metric to measure the similarity between the input image and the in-distribution images. The segmentation model is pre-trained on a publicly available cardiac short-axis cine MRI dataset. The detection performance of the proposed method is evaluated on 13 different OOD datasets, which can be categorized as near, mild, and far OOD datasets based on their similarity to the in-distribution dataset. The results show that our method outperforms state-of-the-art feature space-based and uncertainty-based OOD detection methods across the various OOD datasets. Our method successfully detects near, mild, and far OOD images with high detection accuracy, showcasing the advantage of using the multi-scale and semantically rich representations of the encoder. In addition to the feature-based approach, we also propose a Dice coefficient-based OOD detection method, which demonstrates superior performance for adversarial OOD detection and shows a high correlation with segmentation quality. For the uncertainty-based method, despite having a strong correlation with the quality of the segmentation results in the near OOD datasets, they failed to detect mild and far OOD images, indicating the weakness of these methods when the images are more dissimilar. Future work will explore combining Mahalanobis distance and uncertainty scores for improved detection of challenging OOD images that are difficult to segment.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"119 ","pages":"Article 102476"},"PeriodicalIF":5.4,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142866125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
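The core scoring step above, measuring the Mahalanobis distance between an input's encoder features and the in-distribution feature statistics, can be sketched as below for a single feature scale (the paper aggregates multi-scale representations). Shapes and the covariance regularizer are illustrative.

```python
# Minimal feature-space OOD scoring with Mahalanobis distance; one encoder scale only.
import numpy as np

def fit_gaussian(feats: np.ndarray):
    """feats: (N, D) in-distribution features. Returns mean and inverse covariance."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])  # regularized
    return mu, np.linalg.inv(cov)

def mahalanobis_score(x: np.ndarray, mu: np.ndarray, cov_inv: np.ndarray) -> float:
    d = x - mu
    return float(d @ cov_inv @ d)  # larger score -> more likely OOD

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(500, 32))           # in-distribution encoder features
mu, cov_inv = fit_gaussian(train_feats)
in_x = rng.normal(size=32)                         # looks like training data
ood_x = rng.normal(loc=4.0, size=32)               # shifted, i.e. out-of-distribution
print(mahalanobis_score(in_x, mu, cov_inv), mahalanobis_score(ood_x, mu, cov_inv))
```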
Adaptive fusion of dual-view for grading prostate cancer
IF 5.4 · CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2025-01-01 · DOI: 10.1016/j.compmedimag.2024.102479
Yaolin He, Bowen Li, Ruimin He, Guangming Fu, Dan Sun, Dongyong Shan, Zijian Zhang
{"title":"Adaptive fusion of dual-view for grading prostate cancer","authors":"Yaolin He ,&nbsp;Bowen Li ,&nbsp;Ruimin He ,&nbsp;Guangming Fu ,&nbsp;Dan Sun ,&nbsp;Dongyong Shan ,&nbsp;Zijian Zhang","doi":"10.1016/j.compmedimag.2024.102479","DOIUrl":"10.1016/j.compmedimag.2024.102479","url":null,"abstract":"<div><div>Accurate preoperative grading of prostate cancer is crucial for assisted diagnosis. Multi-parametric magnetic resonance imaging (MRI) is a commonly used non-invasive approach, however, the interpretation of MRI images is still subject to significant subjectivity due to variations in physicians’ expertise and experience. To achieve accurate, non-invasive, and efficient grading of prostate cancer, this paper proposes a deep learning method that adaptively fuses dual-view MRI images. Specifically, a dual-view adaptive fusion model is designed. The model employs encoders to extract embedded features from two MRI sequences: T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC). The model reconstructs the original input images using the embedded features and adopts a cross-embedding fusion module to adaptively fuse the embedded features from the two views. Adaptive fusion refers to dynamically adjusting the fusion weights of the features from the two views according to different input samples, thereby fully utilizing complementary information. Furthermore, the model adaptively weights the prediction results from the two views based on uncertainty estimation, further enhancing the grading performance. To verify the importance of effective multi-view fusion for prostate cancer grading, extensive experiments are designed. The experiments evaluate the performance of single-view models, dual-view models, and state-of-the-art multi-view fusion algorithms. The results demonstrate that the proposed dual-view adaptive fusion method achieves the best grading performance, confirming its effectiveness for assisted grading diagnosis of prostate cancer. This study provides a novel deep learning solution for preoperative grading of prostate cancer, which has the potential to assist clinical physicians in making more accurate diagnostic decisions and has significant clinical application value.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"119 ","pages":"Article 102479"},"PeriodicalIF":5.4,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142873356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
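A minimal sketch of sample-adaptive dual-view fusion follows: a small gating network predicts per-sample weights for the T2WI and ADC embeddings. The paper's cross-embedding fusion module and uncertainty-based prediction weighting are more elaborate; this only illustrates the idea of fusion weights that change with each input sample.

```python
# Illustrative sample-adaptive fusion of two view embeddings; dimensions assumed.
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # gate maps the concatenated views to two weights that sum to 1 per sample
        self.gate = nn.Sequential(nn.Linear(2 * dim, 2), nn.Softmax(dim=-1))

    def forward(self, t2: torch.Tensor, adc: torch.Tensor) -> torch.Tensor:
        w = self.gate(torch.cat([t2, adc], dim=-1))   # (B, 2) per-sample weights
        return w[:, :1] * t2 + w[:, 1:] * adc         # convex combination of views

fusion = AdaptiveFusion(dim=64)
t2_emb, adc_emb = torch.randn(4, 64), torch.randn(4, 64)
print(fusion(t2_emb, adc_emb).shape)                  # torch.Size([4, 64])
```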
Head pose-assisted localization of facial landmarks for enhanced fast registration in skull base surgery
IF 5.4 · CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2024-12-30 · DOI: 10.1016/j.compmedimag.2024.102483
Yifei Yang, Jingfan Fan, Tianyu Fu, Deqiang Xiao, Dongsheng Ma, Hong Song, Zhengkai Feng, Youping Liu, Jian Yang
{"title":"Head pose-assisted localization of facial landmarks for enhanced fast registration in skull base surgery","authors":"Yifei Yang ,&nbsp;Jingfan Fan ,&nbsp;Tianyu Fu ,&nbsp;Deqiang Xiao ,&nbsp;Dongsheng Ma ,&nbsp;Hong Song ,&nbsp;Zhengkai Feng ,&nbsp;Youping Liu ,&nbsp;Jian Yang","doi":"10.1016/j.compmedimag.2024.102483","DOIUrl":"10.1016/j.compmedimag.2024.102483","url":null,"abstract":"<div><div>In skull base surgery, the method of using a probe to draw or 3D scanners to acquire intraoperative facial point clouds for spatial registration presents several issues. Manual manipulation results in inefficiency and poor consistency. Traditional registration algorithms based on point clouds are highly dependent on the initial pose. The complexity of registration algorithms can also extend the required time. To address these issues, we used an RGB-D camera to capture real-time facial point clouds during surgery. The initial registration of the 3D model reconstructed from preoperative CT/MR images and the point cloud collected during surgery is accomplished through corresponding facial landmarks. The facial point clouds collected intraoperatively often contain rotations caused by the free-angle camera. Benefit from the close spatial geometric relationship between head pose and facial landmarks coordinates, we propose a facial landmarks localization network assisted by estimating head pose. The shared representation head pose estimation module boosts network performance by enhancing its perception of global facial features. The proposed network facilitates the localization of landmark points in both preoperative and intraoperative point clouds, enabling rapid automatic registration. A free-view human facial landmarks dataset called 3D-FVL was synthesized from clinical CT images for training. The proposed network achieves leading localization accuracy and robustness on two public datasets and the 3D-FVL. In clinical experiments, using the Artec Eva scanner, the trained network achieved a concurrent reduction in average registration time to 0.28 s, with an average registration error of 2.33 mm. The proposed method significantly reduced registration time, while meeting clinical accuracy requirements for surgical navigation. Our research will help to improving the efficiency and quality of skull base surgery.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"120 ","pages":"Article 102483"},"PeriodicalIF":5.4,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142958412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
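The initial landmark-based registration step above is, in its standard form, a rigid Procrustes/Kabsch fit between corresponding landmark sets. The sketch below shows that textbook computation; it is not the authors' exact pipeline, and landmark correspondences are assumed to be given by the localization network.

```python
# Kabsch/Procrustes rigid alignment from corresponding 3D landmarks: estimates the
# rotation R and translation t such that dst ~= R @ src + t.
import numpy as np

def rigid_from_landmarks(src: np.ndarray, dst: np.ndarray):
    """src, dst: (N, 3) corresponding landmark coordinates. Returns R (3x3), t (3,)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)   # center both sets
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)             # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))                # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

rng = np.random.default_rng(0)
src = rng.normal(size=(6, 3))                             # e.g., 6 facial landmarks
theta = np.pi / 5
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, 2.0, 3.0])          # known rigid transform
R, t = rigid_from_landmarks(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [1.0, 2.0, 3.0]))  # True True
```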
Attention incorporated network for sharing low-rank, image and k-space information during MR image reconstruction to achieve single breath-hold cardiac Cine imaging
IF 5.4 · CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2024-12-28 · DOI: 10.1016/j.compmedimag.2024.102475
Siying Xu, Kerstin Hammernik, Andreas Lingg, Jens Kübler, Patrick Krumm, Daniel Rueckert, Sergios Gatidis, Thomas Küstner
{"title":"Attention incorporated network for sharing low-rank, image and k-space information during MR image reconstruction to achieve single breath-hold cardiac Cine imaging","authors":"Siying Xu ,&nbsp;Kerstin Hammernik ,&nbsp;Andreas Lingg ,&nbsp;Jens Kübler ,&nbsp;Patrick Krumm ,&nbsp;Daniel Rueckert ,&nbsp;Sergios Gatidis ,&nbsp;Thomas Küstner","doi":"10.1016/j.compmedimag.2024.102475","DOIUrl":"10.1016/j.compmedimag.2024.102475","url":null,"abstract":"<div><div>Cardiac Cine Magnetic Resonance Imaging (MRI) provides an accurate assessment of heart morphology and function in clinical practice. However, MRI requires long acquisition times, with recent deep learning-based methods showing great promise to accelerate imaging and enhance reconstruction quality. Existing networks exhibit some common limitations that constrain further acceleration possibilities, including single-domain learning, reliance on a single regularization term, and equal feature contribution. To address these limitations, we propose to embed information from multiple domains, including low-rank, image, and k-space, in a novel deep learning network for MRI reconstruction, which we denote as A-LIKNet. A-LIKNet adopts a parallel-branch structure, enabling independent learning in the k-space and image domain. Coupled information sharing layers realize the information exchange between domains. Furthermore, we introduce attention mechanisms into the network to assign greater weights to more critical coils or important temporal frames. Training and testing were conducted on an in-house dataset, including 91 cardiovascular patients and 38 healthy subjects scanned with 2D cardiac Cine using retrospective undersampling. Additionally, we evaluated A-LIKNet on the real-time prospectively undersampled data from the OCMR dataset. The results demonstrate that our proposed A-LIKNet outperforms existing methods and provides high-quality reconstructions. The network can effectively reconstruct highly retrospectively undersampled dynamic MR images up to <span><math><mrow><mn>24</mn><mo>×</mo></mrow></math></span> accelerations, indicating its potential for single breath-hold imaging.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"120 ","pages":"Article 102475"},"PeriodicalIF":5.4,"publicationDate":"2024-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142985476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
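A rough sketch of one image/k-space information-sharing step in a parallel-branch reconstruction network: each branch is updated with the other branch's features mapped into its own domain via the (inverse) Fourier transform. The simple averaging rule here is an illustrative stand-in for A-LIKNet's learned information-sharing layers, not the published architecture.

```python
# Illustrative image <-> k-space sharing via FFT; the averaging rule is assumed.
import torch
import torch.fft

def to_kspace(img: torch.Tensor) -> torch.Tensor:
    return torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))

def to_image(ksp: torch.Tensor) -> torch.Tensor:
    return torch.fft.ifft2(torch.fft.ifftshift(ksp, dim=(-2, -1)))

def share(img_feat: torch.Tensor, ksp_feat: torch.Tensor):
    """Update each branch with the other branch transformed into its domain."""
    img_new = 0.5 * (img_feat + to_image(ksp_feat))
    ksp_new = 0.5 * (ksp_feat + to_kspace(img_feat))
    return img_new, ksp_new

img = torch.randn(1, 1, 64, 64, dtype=torch.complex64)   # complex-valued MR image
ksp = to_kspace(img)
img2, ksp2 = share(img, ksp)
print(img2.shape, ksp2.shape)
```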
PET-based lesion graphs meet clinical data: An interpretable cross-attention framework for DLBCL treatment response prediction
IF 5.4 · CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2024-12-25 · DOI: 10.1016/j.compmedimag.2024.102481
Oriane Thiery, Mira Rizkallah, Clément Bailly, Caroline Bodet-Milin, Emmanuel Itti, René-Olivier Casasnovas, Steven Le Gouill, Thomas Carlier, Diana Mateus
{"title":"PET-based lesion graphs meet clinical data: An interpretable cross-attention framework for DLBCL treatment response prediction","authors":"Oriane Thiery ,&nbsp;Mira Rizkallah ,&nbsp;Clément Bailly ,&nbsp;Caroline Bodet-Milin ,&nbsp;Emmanuel Itti ,&nbsp;René-Olivier Casasnovas ,&nbsp;Steven Le Gouill ,&nbsp;Thomas Carlier ,&nbsp;Diana Mateus","doi":"10.1016/j.compmedimag.2024.102481","DOIUrl":"10.1016/j.compmedimag.2024.102481","url":null,"abstract":"<div><div>Diffuse Large B-cell Lymphoma (DLBCL) is a lymphatic cancer of steadily growing incidence. Its diagnostic and follow-up rely on the analysis of clinical biomarkers and 18F-Fluorodeoxyglucose (FDG)-PET/CT images. In this context, we target the problem of assisting in the early identification of high-risk DLBCL patients from both images and tabular clinical data. We propose a solution based on a graph neural network model, capable of simultaneously modeling the variable number of lesions across patients, and fusing information from both data modalities and over lesions. Given the distributed nature of DLBCL lesions, we represent the PET image of each patient as an attributed lesion graph. Such lesion-graphs keep all relevant image information while offering a compact tradeoff between the characterization of full images and single lesions. We also design a cross-attention module to fuse the image attributes with clinical indicators, which is particularly challenging given the large difference in dimensionality and prognostic strength of each modality. To this end, we propose several cross-attention configurations, discuss the implications of each design, and experimentally compare their performances. The last module fuses the updated attributes across lesions and makes a probabilistic prediction of the patient’s 2-year progression-free survival (PFS). We carry out the experimental validation of our proposed framework on a prospective multicentric dataset of 545 patients. Experimental results show our framework effectively integrates the multi-lesion image information improving over a model relying only on the most prognostic clinical data. The analysis further shows the interpretable properties inherent to our graph-based design, which enables tracing the decision back to the most important lesions and features.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"120 ","pages":"Article 102481"},"PeriodicalIF":5.4,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143015673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
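A minimal sketch of one cross-attention configuration of the kind discussed above: lesion-node embeddings act as queries over clinical-indicator tokens, so each lesion is updated with clinical context. The dimensions, single head, and residual update are illustrative assumptions, not one of the paper's exact configurations.

```python
# Illustrative cross-attention fusing per-lesion embeddings with clinical tokens.
import torch
import torch.nn as nn

class ClinicalCrossAttention(nn.Module):
    """Lesion embeddings (queries) attend to clinical indicator tokens (keys/values)."""
    def __init__(self, lesion_dim: int, clin_dim: int, dim: int = 64):
        super().__init__()
        self.q = nn.Linear(lesion_dim, dim)
        self.k = nn.Linear(clin_dim, dim)
        self.v = nn.Linear(clin_dim, dim)
        self.proj = nn.Linear(dim, lesion_dim)

    def forward(self, lesions: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        # lesions: (L, lesion_dim); clinical: (C, clin_dim), one token per indicator
        q, k, v = self.q(lesions), self.k(clinical), self.v(clinical)
        attn = torch.softmax(q @ k.t() / q.shape[-1] ** 0.5, dim=-1)  # (L, C)
        return lesions + self.proj(attn @ v)      # residual update of lesion features

xattn = ClinicalCrossAttention(lesion_dim=32, clin_dim=1)
lesion_feats = torch.randn(5, 32)   # 5 lesions from one patient's PET lesion graph
clin_tokens = torch.randn(8, 1)     # 8 scalar clinical indicators as tokens
print(xattn(lesion_feats, clin_tokens).shape)     # torch.Size([5, 32])
```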
General retinal layer segmentation in OCT images via reinforcement constraint
IF 5.4 · CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2024-12-24 · DOI: 10.1016/j.compmedimag.2024.102480
Jinbao Hao, Huiqi Li, Shuai Lu, Zeheng Li, Weihang Zhang
{"title":"General retinal layer segmentation in OCT images via reinforcement constraint","authors":"Jinbao Hao,&nbsp;Huiqi Li,&nbsp;Shuai Lu,&nbsp;Zeheng Li,&nbsp;Weihang Zhang","doi":"10.1016/j.compmedimag.2024.102480","DOIUrl":"10.1016/j.compmedimag.2024.102480","url":null,"abstract":"<div><div>The change of layer thickness of retina is closely associated with the development of ocular diseases such as glaucoma and optic disc drusen. Optical coherence tomography (OCT) is a widely used technology to visualize the lamellar structures of retina. Accurate segmentation of retinal lamellar structures is crucial for diagnosis, treatment, and related research of ocular diseases. However, existing studies have focused on improving the segmentation accuracy, they cannot achieve consistent segmentation performance on different types of datasets, such as retinal OCT images with optic disc and interference of diseases. To this end, a general retinal layer segmentation method is presented in this paper. To obtain more continuous and smoother boundaries, feature enhanced decoding module with reinforcement constraint is proposed, fusing boundary prior and distribution prior, and correcting bias in learning process simultaneously. To enhance the model’s perception of the slender retinal structure, position channel attention is introduced, obtaining global dependencies of both space and channel. To handle the imbalanced distribution of retinal OCT images, focal loss is introduced, guiding the model to pay more attention to retinal layers with a smaller proportion. The designed method achieves the state-of-the-art (SOTA) overall performance on five datasets (i.e., MGU, DUKE, NR206, OCTA500 and private dataset).</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"120 ","pages":"Article 102480"},"PeriodicalIF":5.4,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142933309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
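Focal loss, named above for handling the imbalanced layer distribution, down-weights well-classified pixels so training focuses on rare layers. Here is a minimal multi-class sketch; the gamma value and mean reduction are conventional choices, not necessarily the paper's settings.

```python
# Minimal multi-class focal loss: cross-entropy scaled by (1 - p_t)^gamma.
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, target: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """logits: (N, C, ...) raw scores; target: (N, ...) integer class labels."""
    ce = F.cross_entropy(logits, target, reduction="none")  # per-pixel -log p_t
    pt = torch.exp(-ce)                                     # confidence in true class
    return ((1 - pt) ** gamma * ce).mean()                  # down-weight easy pixels

logits = torch.randn(2, 9, 16, 16)    # e.g., 9 retinal layer classes on a 16x16 crop
target = torch.randint(0, 9, (2, 16, 16))
print(focal_loss(logits, target).item())
```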
Computer-assisted diagnosis for axillary lymph node metastasis of early breast cancer based on transformer with dual-modal adaptive mid-term fusion using ultrasound elastography
IF 5.4 · CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2024-11-26 · DOI: 10.1016/j.compmedimag.2024.102472
Chihao Gong, Yinglan Wu, Guangyuan Zhang, Xuan Liu, Xiaoyao Zhu, Nian Cai, Jian Li
{"title":"Computer-assisted diagnosis for axillary lymph node metastasis of early breast cancer based on transformer with dual-modal adaptive mid-term fusion using ultrasound elastography","authors":"Chihao Gong ,&nbsp;Yinglan Wu ,&nbsp;Guangyuan Zhang ,&nbsp;Xuan Liu ,&nbsp;Xiaoyao Zhu ,&nbsp;Nian Cai ,&nbsp;Jian Li","doi":"10.1016/j.compmedimag.2024.102472","DOIUrl":"10.1016/j.compmedimag.2024.102472","url":null,"abstract":"<div><div>Accurate preoperative qualitative assessment of axillary lymph node metastasis (ALNM) in early breast cancer patients is crucial for precise clinical staging and selection of axillary treatment strategies. Although previous studies have introduced artificial intelligence (AI) to enhance the assessment performance of ALNM, they all focus on the prediction performances of their AI models and neglect the clinical assistance to the radiologists, which brings some issues to the clinical practice. To this end, we propose a human–AI collaboration strategy for ALNM diagnosis of early breast cancer, in which a novel deep learning framework, termed DAMF-former, is designed to assist radiologists in evaluating ALNM. Specifically, the DAMF-former focuses on the axillary region rather than the primary tumor area in previous studies. To mimic the radiologists’ alternative integration of the UE images of the target axillary lymph nodes for comprehensive analysis, adaptive mid-term fusion is proposed to alternatively extract and adaptively fuse the high-level features from the dual-modal UE images (i.e., B-mode ultrasound and Shear Wave Elastography). To further improve the diagnostic outcome of the DAMF-former, an adaptive Youden index scheme is proposed to deal with the fully fused dual-modal UE image features at the end of the framework, which can balance the diagnostic performance in terms of sensitivity and specificity. The clinical experiment indicates that the designed DAMF-former can assist and improve the diagnostic abilities of less-experienced radiologists for ALNM. Especially, the junior radiologists can significantly improve the diagnostic outcome from 0.807 AUC [95% CI: 0.781, 0.830] to 0.883 AUC [95% CI: 0.861, 0.902] (<span><math><mi>P</mi></math></span>-value <span><math><mo>&lt;</mo></math></span>0.0001). Moreover, there are great agreements among radiologists of different levels when assisted by the DAMF-former (Kappa value ranging from 0.805 to 0.895; <span><math><mi>P</mi></math></span>-value <span><math><mo>&lt;</mo></math></span>0.0001), suggesting that less-experienced radiologists can potentially achieve a diagnostic level similar to that of experienced radiologists through human–AI collaboration. This study explores a potential solution to human–AI collaboration for ALNM diagnosis based on UE images.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"119 ","pages":"Article 102472"},"PeriodicalIF":5.4,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142743192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
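The adaptive Youden index scheme above builds on Youden's J statistic (sensitivity + specificity - 1). A minimal sketch of the non-adaptive baseline, picking the decision threshold that maximizes J, follows; the paper's adaptive variant, which operates on the fused dual-modal features, is not reproduced here.

```python
# Pick the decision threshold maximizing Youden's J = sensitivity + specificity - 1.
import numpy as np
from sklearn.metrics import roc_curve

def youden_threshold(y_true: np.ndarray, scores: np.ndarray) -> float:
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    j = tpr - fpr                       # Youden's J at each candidate threshold
    return float(thresholds[np.argmax(j)])

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)                      # toy metastasis labels
scores = y + rng.normal(scale=0.8, size=200)          # noisy scores tracking labels
t = youden_threshold(y, scores)
pred = (scores >= t).astype(int)
sens = (pred[y == 1] == 1).mean()
spec = (pred[y == 0] == 0).mean()
print(f"threshold={t:.3f}, sensitivity={sens:.2f}, specificity={spec:.2f}")
```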
Uncertainty-aware regression model to predict post-operative visual acuity in patients with macular holes
IF 5.4 · CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics · Pub Date: 2024-11-26 · DOI: 10.1016/j.compmedimag.2024.102461
Burak Kucukgoz, Ke Zou, Declan C. Murphy, David H. Steel, Boguslaw Obara, Huazhu Fu
{"title":"Uncertainty-aware regression model to predict post-operative visual acuity in patients with macular holes","authors":"Burak Kucukgoz ,&nbsp;Ke Zou ,&nbsp;Declan C. Murphy ,&nbsp;David H. Steel ,&nbsp;Boguslaw Obara ,&nbsp;Huazhu Fu","doi":"10.1016/j.compmedimag.2024.102461","DOIUrl":"10.1016/j.compmedimag.2024.102461","url":null,"abstract":"<div><div>Full-thickness macular holes are a relatively common and visually disabling condition with a prevalence of approximately 0.5% in the over-40-year-old age group. If left untreated, the hole typically enlarges, reducing visual acuity (VA) below the definition of blindness in the eye affected. They are now routinely treated with surgery, which can close the hole and improve vision in most cases. The extent of improvement, however, is variable and dependent on the size of the hole and other features which can be discerned in spectral-domain optical coherence tomography imaging, which is now routinely available in eye clinics globally. Artificial intelligence (AI) models have been developed to enable surgical decision-making and have achieved relatively high predictive performance. However, their black-box behavior is opaque to users and uncertainty associated with their predictions is not typically stated, leading to a lack of trust among clinicians and patients. In this paper, we describe an uncertainty-aware regression model (U-ARM) for predicting VA for people undergoing macular hole surgery using preoperative spectral-domain optical coherence tomography images, achieving an MAE of 6.07, RMSE of 9.11 and R2 of 0.47 in internal tests, and an MAE of 6.49, RMSE of 9.49, and R2 of 0.42 in external tests. In addition to predicting VA following surgery, U-ARM displays its associated uncertainty, a <span><math><mi>p</mi></math></span>-value of &lt;0.005 in internal and external tests, showing the predictions are not due to random chance. We then qualitatively evaluated the performance of U-ARM. Lastly, we demonstrate out-of-sample data performance, generalizing well to data outside the training distribution, low-quality images, and unseen instances not encountered during training. The results show that U-ARM outperforms commonly used methods in terms of prediction and reliability. U-ARM is thus a promising approach for clinical settings and can improve the reliability of AI models in predicting VA.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"119 ","pages":"Article 102461"},"PeriodicalIF":5.4,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142743193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
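One common way to make a regression model uncertainty-aware is to predict a per-sample mean and variance and train with the Gaussian negative log-likelihood. The sketch below shows that generic formulation as a point of reference; it is an assumption for illustration, not U-ARM's actual architecture or uncertainty estimator.

```python
# Generic heteroscedastic regression head: predicts mean and log-variance, trained
# with the Gaussian negative log-likelihood; the variance is the model's uncertainty.
import torch
import torch.nn as nn

class UncertainRegressor(nn.Module):
    def __init__(self, in_dim: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mean_head = nn.Linear(64, 1)      # predicted VA
        self.logvar_head = nn.Linear(64, 1)    # predicted log-variance (uncertainty)

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, target):
    # 0.5 * (log sigma^2 + (y - mu)^2 / sigma^2), up to a constant
    return (0.5 * (logvar + (target - mean) ** 2 / logvar.exp())).mean()

model = UncertainRegressor(in_dim=16)
x, y = torch.randn(8, 16), torch.randn(8, 1)   # toy features and VA targets
mean, logvar = model(x)
loss = gaussian_nll(mean, logvar, y)
loss.backward()
print(loss.item(), logvar.exp().mean().item())  # loss and mean predicted variance
```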