Computerized Medical Imaging and Graphics: Latest Articles

Capturing action triplet correlations for accurate surgical activity recognition
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics Pub Date: 2025-07-14 DOI: 10.1016/j.compmedimag.2025.102604
Xiaoyang Zou, Derong Yu, Guoyan Zheng
Abstract: Surgical activity recognition is essential for providing real-time, context-aware decision support in the development of computer-assisted surgery systems. A fine-grained surgical activity is represented by an action triplet of the form ⟨instrument, verb, target⟩, which provides the three essential components of a surgical action: the instrument used to perform the action, the verb describing the action being performed, and the target tissue with which the instrument is interacting. A key challenge in surgical activity recognition lies in capturing the inherent correlations between action triplets and their associated components. To address this challenge, starting from features extracted by a transformer-based spatial–temporal feature extractor with banded causal masks, this paper proposes a novel framework for accurate surgical activity recognition that captures action triplet correlations at both the feature and output levels. At the feature level, a graph convolutional network (GCN)-based module, referred to as TripletGCN, captures triplet correlations for feature enhancement. Inspired by the observation that surgeons perform specific operations using corresponding sets of instruments following clinical guidelines, a data-driven triplet correlation matrix is designed to guide information propagation among inter-dependent event nodes in TripletGCN. At the output level, in addition to a binary cross-entropy loss for supervised learning, an adversarial learning process denoted TripletAL aligns the joint triplet distribution between the ground-truth labels and the predicted results, thereby further enhancing triplet correlations. Comprehensive experiments were conducted on two publicly available datasets from the CholecTriplet2021 challenge, CholecT45 and CholecT50. The method achieves an average mean Average Precision (mAP) of 41.5% on CholecT45 using 5-fold cross-validation and an average mAP of 42.5% on CholecT50 using the challenge data split. The method also generalizes to verb–target pair recognition on the publicly available SARAS-MESAD dataset.
Computerized Medical Imaging and Graphics, Volume 124, Article 102604.
Citations: 0
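The data-driven correlation matrix and GCN propagation at the heart of a module like TripletGCN can be sketched in a few lines (a toy NumPy illustration, not the authors' code; the node counts, feature sizes, and co-occurrence statistics are invented for the example):

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution step: add self-loops, symmetrically normalize A,
    propagate node features, apply ReLU."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy setup: 4 "event nodes" (e.g. instrument/verb/target classes) with 8-dim features.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 8))

# Data-driven correlation matrix from label co-occurrence counts in a toy training set.
labels = np.array([[1, 1, 0, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 1]])  # samples x classes
C = labels.T @ labels                                  # co-occurrence counts
np.fill_diagonal(C, 0)                                 # drop self co-occurrence
A = C / np.maximum(C.sum(axis=1, keepdims=True), 1)    # row-normalized correlation matrix
H_out = gcn_layer(H, A, W)
print(H_out.shape)  # (4, 8)
```

The correlation matrix biases message passing toward class pairs that actually co-occur in training data, which is the stated motivation for guiding propagation among inter-dependent event nodes.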
Characterizing and differentiating brain states through a CS-KBRs framework for highlighting the synergy of common and specific brain regions
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics Pub Date: 2025-07-14 DOI: 10.1016/j.compmedimag.2025.102609
Di Zhu, Shu Zhang, Sigang Yu, Qilong Yuan, Kui Zhao, Yanqing Kang, Tuo Zhang, Xi Jiang, Tianming Liu
Abstract: In neuroscience, understanding how different brain regions coordinate to drive various brain states is critical for revealing the nature of cognitive processes and their manifestation in brain functions and disorders. Despite the promise shown by deep learning methods in brain state classification using fMRI data, their interpretability remains a challenge, particularly in understanding the distinct characteristics of the identified ROIs. This study introduces a novel framework based on the Dynamic Graph Convolutional Neural Network (DGCNN) to identify key brain regions (KBRs) crucial for brain state classification tasks. By dynamically updating the adjacency matrix, the approach more effectively evaluates the importance of each brain region, allowing the accurate selection of 56 KBRs from 148 regions that significantly enhance classification performance compared to using all brain regions. To investigate why the KBRs perform better, they are categorized into hub-like Common and Specific regions, forming a CS-KBRs framework. This analysis shows that Common regions act as central hubs with strong connectivity, enabling global integration across the brain, while Specific regions capture localized, task-relevant details that are vital for differentiating particular brain states. This core-peripheral complementary relationship between Common and Specific regions provides a comprehensive representation of both global and local features, which is essential for accurately distinguishing brain states. The findings reveal that this synergistic mechanism within the CS-KBRs framework not only enhances model accuracy but also offers a deeper understanding of how different brain regions collectively contribute to the expression and differentiation of various brain states.
Computerized Medical Imaging and Graphics, Volume 124, Article 102609.
Citations: 0
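The "hub-like Common regions" are characterized by strong connectivity. As a hedged sketch of how hub-like regions might be ranked from a functional-connectivity matrix, the snippet below uses weighted degree over a correlation matrix; this construction is an illustrative assumption, not the DGCNN importance measure used in the paper:

```python
import numpy as np

def hub_like_regions(fc, k):
    """Rank regions by weighted degree (sum of absolute connectivity,
    excluding the self-connection) and return the top-k region indices."""
    degree = np.abs(fc).sum(axis=1) - np.abs(np.diag(fc))
    return np.argsort(degree)[::-1][:k]

rng = np.random.default_rng(1)
x = rng.standard_normal((200, 6))            # 200 time points, 6 toy regions
# Make region 0 track several others, so it behaves like a connectivity hub.
x[:, 0] = x[:, 1:4].mean(axis=1) + 0.1 * rng.standard_normal(200)
fc = np.corrcoef(x, rowvar=False)            # functional connectivity as correlation
top = hub_like_regions(fc, 2)
print(top)
```

Region 0, being correlated with several others, accumulates the highest weighted degree and ranks first.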
A deep learning-based clinical decision support system for glioma grading using ensemble learning and knowledge distillation
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics Pub Date: 2025-07-10 DOI: 10.1016/j.compmedimag.2025.102602
Yichong Liu, Zhiliang Shi, Chaoyang Xiao, Bo Wang
Abstract: Gliomas are the most common malignant primary brain tumors, and grading their severity, particularly diagnosing low-grade gliomas, remains a challenging task for clinicians and radiologists. With advances in deep learning and medical image processing, Clinical Decision Support Systems (CDSS) for glioma grading offer significant benefits for clinical treatment. This study proposes a CDSS for glioma grading that integrates a novel feature extraction framework combining ensemble learning and knowledge distillation: teacher models are constructed through ensemble learning, while uncertainty-weighted ensemble averaging is applied during student model training to refine knowledge transfer. This approach bridges the teacher-student performance gap, enhancing grading accuracy, reliability, and clinical applicability with lightweight deployment. Experimental results show 85.96% accuracy (a 5.2% improvement over the baseline), with precision (83.90%), recall (87.40%), and F1-score (83.90%) increasing by 7.5%, 5.1%, and 5.1%, respectively. The teacher-student performance gap is reduced to 3.2%, confirming effectiveness. Furthermore, the developed CDSS not only ensures rapid and accurate glioma grading but also reports the critical features influencing the grading results, seamlessly integrating a methodology for generating comprehensive diagnostic reports. The glioma grading CDSS thus represents a practical clinical decision support tool capable of delivering accurate and efficient auxiliary diagnostic decisions for physicians and patients.
Computerized Medical Imaging and Graphics, Volume 124, Article 102602.
Citations: 0
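Uncertainty-weighted ensemble averaging of teacher predictions can be illustrated as follows (a minimal sketch assuming inverse-entropy weights; the abstract does not specify this exact weighting form):

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of each probability vector (higher = more uncertain)."""
    return -(p * np.log(p + eps)).sum(axis=-1)

def uncertainty_weighted_ensemble(probs):
    """Average teacher predictions for one sample, weighting each teacher
    by the inverse of its predictive entropy.

    probs: (n_teachers, n_classes) softmax outputs."""
    w = 1.0 / (entropy(probs) + 1e-12)
    w = w / w.sum()
    return (w[:, None] * probs).sum(axis=0)

teachers = np.array([
    [0.90, 0.05, 0.05],   # confident teacher -> low entropy -> high weight
    [0.40, 0.35, 0.25],   # uncertain teacher -> low weight
])
soft_target = uncertainty_weighted_ensemble(teachers)
print(soft_target)
```

The resulting soft target leans toward the confident teacher (its first entry exceeds the plain mean of 0.65), which is the intended refinement over uniform averaging.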
Adaptive batch-fusion self-supervised learning for ultrasound image pretraining
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics Pub Date: 2025-07-08 DOI: 10.1016/j.compmedimag.2025.102599
Jiansong Zhang, Xiuming Wu, Shunlan Liu, Yuling Fan, Yongjian Chen, Guorong Lyu, Peizhong Liu, Zhonghua Liu, Shaozheng He
Abstract: Medical self-supervised learning eliminates the reliance on labels, making feature extraction simple and efficient. However, the intricate design of pretext tasks in single-modal self-supervised analysis presents challenges, compounded by an excessive dependency on data augmentation, creating a bottleneck in medical self-supervised learning research. This paper therefore reanalyzes the feature learnability introduced by data augmentation strategies in medical image self-supervised learning. It introduces an adaptive self-supervised data augmentation method from the perspective of batch fusion, together with a conv embedding block for learning the incremental representation between these batches. Tested on 5 fused data tasks proposed by previous researchers, the method achieves a linear classification protocol accuracy of 94.25% with only 150 rounds of self-supervised feature training in a Vision Transformer (ViT), the best among comparable methods. A detailed ablation study of previous augmentation strategies indicates that the proposed medical data augmentation strategy effectively represents ultrasound data features in the self-supervised learning process. The code and weights are publicly available (linked in the article).
Computerized Medical Imaging and Graphics, Volume 124, Article 102599.
Citations: 0
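The adaptive batch-fusion details are not given in the abstract; as a hedged sketch, in-batch fusion in the spirit of mixup might look like the following (the `batch_fusion` helper and its parameters are hypothetical, not the paper's method):

```python
import numpy as np

def batch_fusion(batch, alpha=0.4, seed=0):
    """Mixup-style sketch of in-batch fusion: blend each image with a
    randomly chosen partner from the same batch.

    batch: (B, H, W) array of images in [0, 1].
    Returns the fused batch and the per-sample mix weights."""
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha, size=len(batch))        # per-sample fusion strength
    perm = rng.permutation(len(batch))                   # shuffled fusion partners
    fused = lam[:, None, None] * batch + (1.0 - lam)[:, None, None] * batch[perm]
    return fused, lam

images = np.random.default_rng(2).random((4, 8, 8))      # toy "ultrasound" batch
fused, lam = batch_fusion(images)
print(fused.shape)  # (4, 8, 8)
```

Because each fused image is a convex combination of two batch members, pixel values stay in range while the pretext task sees novel composite inputs.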
Hierarchical attention fusion of EUS-doppler features for GISTs risk assessment
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics Pub Date: 2025-07-05 DOI: 10.1016/j.compmedimag.2025.102584
QinYue Wei, Yue Gao, Shuyu Liang, Ke Chen, Yuanyuan Wang, Yi Guo
Abstract: Assessing the preoperative malignancy risk of gastrointestinal stromal tumors (GISTs) is crucial for determining the appropriate treatment plan and prognosis. Current automated diagnosis of GISTs based on endoscopic ultrasound (EUS) struggles with stable GISTs classification because of the structural similarity among different risk levels, so incorporating blood flow density information from Doppler images is essential to assist diagnosis. Meanwhile, variability in tumor size limits feature extraction: a single receptive field cannot capture both global and local features, which in turn affects classification accuracy. This paper proposes a Hierarchical Attention-based Multimodal Feature Fusion Network (HAMNet) for stable GISTs diagnosis that fuses structural and blood flow information. First, EUS and Doppler image features are extracted through modality-specific branches to preserve intra-modal information, with masks added as location supplements. Second, the features are integrated through an iterative multimodal attention integrator (IMAI), designed to exploit supervised blood flow information from Doppler images and selectively enhance structural information from EUS images; cross-modal complementary features are emphasized through an attention mechanism and refined further through an iteration strategy. Third, a Hierarchical Multi-Scale Tumor Classification (HMTC) module accommodates the varying sizes of GISTs by capturing features across different receptive fields. The authors construct the first dataset of its kind, pEUS-Doppler-GISTs, comprising 179 cases with 555 paired EUS and Doppler images, and validate HAMNet's performance in preoperative malignancy risk assessment. HAMNet outperforms other state-of-the-art (SOTA) algorithms, achieving an accuracy of 0.875 and an AUC of 0.856. Notably, model sensitivity improves by up to 0.196 over other multimodal methods, indicating its effectiveness in identifying high-risk tumors and its potential application in GISTs CAD systems.
Computerized Medical Imaging and Graphics, Volume 124, Article 102584.
Citations: 0
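Cross-modal attention of the kind the IMAI module builds on (EUS queries attending to Doppler keys/values) can be sketched as follows (identity projections for brevity; a real model learns projection matrices, and this is not the paper's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(eus, doppler):
    """EUS tokens attend to Doppler tokens: queries come from EUS features,
    keys/values from Doppler features; the attended Doppler context is added
    back to the EUS features as a residual."""
    d = eus.shape[-1]
    attn = softmax(eus @ doppler.T / np.sqrt(d))   # (n_eus, n_doppler)
    return eus + attn @ doppler                    # residual cross-modal fusion

eus_feat = np.random.default_rng(3).standard_normal((5, 16))   # 5 EUS tokens, dim 16
dop_feat = np.random.default_rng(4).standard_normal((7, 16))   # 7 Doppler tokens
fused_feat = cross_modal_attention(eus_feat, dop_feat)
print(fused_feat.shape)  # (5, 16)
```

Iterating this step (feeding the fused features back in as new queries) corresponds to the iteration strategy the abstract describes for further refinement.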
AMeta-FD: Adversarial Meta-learning for Few-shot retinal OCT image Despeckling
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics Pub Date: 2025-07-04 DOI: 10.1016/j.compmedimag.2025.102597
Yi Zhou, Tao Peng, Thiara Sana Ahmed, Fei Shi, Weifang Zhu, Dehui Xiang, Leopold Schmetterer, Jianxin Jiang, Bingyao Tan, Xinjian Chen
Abstract: Speckle noise in optical coherence tomography (OCT) images compromises the performance of image analysis tasks such as retinal layer boundary detection. Deep learning algorithms are more cost-effective and robust than hardware solutions and conventional image processing algorithms, but they usually require large training datasets that are time-consuming to acquire. This paper proposes Adversarial Meta-learning for Few-shot raw retinal OCT image Despeckling (AMeta-FD) to reduce speckle noise in OCT images. The method involves two training phases: (1) adversarial meta-training on synthetic noisy OCT image pairs, and (2) fine-tuning with a small set of raw-clean image pairs containing speckle noise. Additionally, a new suppression loss effectively reduces the contribution of non-tissue pixels. The ground truth used in this study is generated by registering and averaging multiple repeated images. AMeta-FD requires only 60 raw-clean image pairs, about 12% of the whole training dataset, yet achieves performance on par with traditional transfer training that utilizes the entire training dataset. Extensive evaluations show that, in terms of signal-to-noise ratio (SNR), AMeta-FD surpasses traditional non-learning-based despeckling methods by at least 15 dB, outperforms the recent meta-learning-based denoising method Few-Shot Meta-Denoising (FSMD) by 11.01 dB, and exceeds the authors' previous best method by 3 dB. The code for AMeta-FD is available at https://github.com/Zhouyi-Zura/AMeta-FD.
Computerized Medical Imaging and Graphics, Volume 124, Article 102597.
Citations: 0
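The ground-truth construction (registering and averaging repeated frames) and the dB-scale SNR metric can be illustrated with synthetic multiplicative speckle (a toy model; the gamma speckle statistics and the frame count of 16 are assumptions, and real frames would first need registration):

```python
import numpy as np

def snr_db(clean, noisy):
    """Signal-to-noise ratio in decibels: 10*log10(signal power / noise power)."""
    noise = noisy - clean
    return 10.0 * np.log10((clean ** 2).sum() / (noise ** 2).sum())

rng = np.random.default_rng(5)
clean = rng.random((64, 64)) + 0.5                           # toy tissue reflectivity
# Multiplicative speckle with unit mean: Gamma(shape=4, scale=1/4).
frames = clean * rng.gamma(4.0, 1.0 / 4.0, size=(16, 64, 64))
avg = frames.mean(axis=0)                                    # "ground truth" by frame averaging

print(round(snr_db(clean, frames[0]), 1), round(snr_db(clean, avg), 1))
```

Averaging 16 independent frames cuts the noise power by roughly 16x, i.e. about a 12 dB SNR gain, which is why repeated-acquisition averages can serve as clean references.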
A VVBP data-based pancreatic lesion detection model with noncontrast CT
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics Pub Date: 2025-07-03 DOI: 10.1016/j.compmedimag.2025.102601
Wanzhen Wang, Chenjie Zhou, Xiaoying Chen, Geye Tang, Jianhua Ma, Yi Gao, Shulong Li
Abstract: Pancreatic cancer (PC) is one of the most aggressive cancers. Noncontrast CT (NCCT) offers a suitable platform for developing early detection algorithms to improve early diagnosis, prognosis, and overall survival. View-by-view back-projection (VVBP) data from the filtered back-projection algorithm reveal that information across different views is correlated, complementary, and often redundant, and may be compressed or overlooked; these data can be interpreted as a 3D decomposition of 2D images, providing a richer representation than individual images. Leveraging these advantages, an NCCT-based pancreatic lesion detection model using VVBP data is proposed. The method processes VVBP data into N sparse images, and the model comprises three main modules: ResNet50-Unet, which extracts primary features from each sparse image and compensates for information loss from simulated VVBP data via a reconstruction branch; a novel multicross channel-spatial-attention (mcCSA) mechanism, which fuses primary features and facilitates feature interaction and learning in VVBP data; and Faster R-CNN with a weighted candidate bounding box fusion (WCBF) technique, which generates advanced region proposals from the integrated VVBP data. The model performed best at N = 3, outperforming competing methods on most metrics, with recalls of 75.7% and 90.5%, precisions of 41.4% and 66.9%, F1 scores of 73.5% and 76.9%, F2 scores of 64.9% and 84.5%, and AP50 values of 56.2% and 76.9% at the image and patient levels, respectively. The 90.5% patient-level recall underscores the model's clinical potential as an AI tool for early PC detection and screening.
Computerized Medical Imaging and Graphics, Volume 124, Article 102601.
Citations: 0
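The reported F1 and F2 scores follow directly from the precision/recall pairs via the F-beta formula; at the patient level the formula reproduces the listed 76.9% and 84.5%:

```python
def f_beta(precision, recall, beta):
    """F-beta score: beta > 1 weights recall more heavily than precision
    (F2 rewards high recall, appropriate for cancer screening)."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Patient-level numbers reported above: precision 66.9%, recall 90.5%.
p, r = 0.669, 0.905
print(round(f_beta(p, r, 1), 3), round(f_beta(p, r, 2), 3))  # 0.769 0.845
```

The F2 score exceeding the F1 score here reflects the model's recall-heavy operating point, which the abstract highlights for screening use.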
CT-Mamba: A hybrid convolutional State Space Model for low-dose CT denoising
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics Pub Date: 2025-07-03 DOI: 10.1016/j.compmedimag.2025.102595
Linxuan Li, Wenjia Wei, Luyao Yang, Wenwen Zhang, Jiashu Dong, Yahua Liu, Hongshi Huang, Wei Zhao
Abstract: Low-dose CT (LDCT) significantly reduces the radiation dose received by patients; however, dose reduction introduces additional noise and artifacts. Denoising methods based on convolutional neural networks (CNNs) face limitations in long-range modeling, while Transformer-based methods, although capable of powerful long-range modeling, suffer from high computational complexity. Furthermore, denoised images predicted by deep learning techniques inevitably differ in noise distribution from normal-dose CT (NDCT) images, which can also affect final image quality and diagnostic outcomes. This paper proposes CT-Mamba, a hybrid convolutional State Space Model for LDCT image denoising. The model combines the local feature extraction advantages of CNNs with Mamba's strength in capturing long-range dependencies, enabling it to capture both local details and global context. An innovative spatially coherent Z-shaped scanning scheme ensures spatial continuity between adjacent pixels, and a Mamba-driven deep noise power spectrum (NPS) loss guides training so that the noise texture of denoised LDCT images closely resembles that of NDCT images, thereby enhancing overall image quality and diagnostic value. Experimental results demonstrate that CT-Mamba performs excellently in reducing noise in LDCT images, enhancing detail preservation, and optimizing noise texture distribution, and exhibits higher statistical similarity with the radiomics features of NDCT images. CT-Mamba thus holds promise as a representative approach for applying the Mamba framework to LDCT denoising tasks.
Computerized Medical Imaging and Graphics, Volume 124, Article 102595.
Citations: 0
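A noise power spectrum of the kind the NPS loss compares can be computed with a 2D FFT (a generic NPS estimate, not the paper's Mamba-driven loss; the additive-noise toy data is an assumption):

```python
import numpy as np

def noise_power_spectrum(noisy, reference):
    """2D NPS sketch: squared magnitude of the FFT of the noise image
    (noisy - reference), normalized by pixel count and shifted so the
    zero frequency sits at the center. An NPS loss would compare such
    spectra between denoised outputs and NDCT references."""
    noise = noisy - reference
    return np.abs(np.fft.fftshift(np.fft.fft2(noise))) ** 2 / noise.size

rng = np.random.default_rng(6)
ndct = rng.random((32, 32))                           # toy "normal-dose" patch
ldct = ndct + 0.1 * rng.standard_normal((32, 32))     # toy noisy stand-in
nps = noise_power_spectrum(ldct, ndct)
print(nps.shape)  # (32, 32)
```

By Parseval's theorem the spectrum's total energy equals the spatial-domain noise power, so matching spectra constrains both noise magnitude and texture.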
Leveraging multithreading on edge computing for smart healthcare based on intelligent multimodal classification approach
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics Pub Date: 2025-07-01 DOI: 10.1016/j.compmedimag.2025.102594
Faris S. Alghareb, Balqees Talal Hasan
Abstract: Medical digitization has developed intensively over the last decade, paving the way for computer-aided medical diagnosis research. Anomaly detection based on machine and deep learning techniques has accordingly been employed extensively in healthcare applications such as medical image classification and monitoring of patients' vital signs. To effectively leverage digitized medical records, this manuscript presents a smart Clinical Decision Support System (CDSS) dedicated to automated diagnosis of multimodal medical data, within a smart healthcare system for medical data management and decision-making. To deliver rapid diagnosis, thread-level parallelism (TLP) distributes classification tasks in parallel across three edge computing devices, each employing an AI module for on-device classification. In contrast to existing machine and deep learning classification techniques, the proposed multithreaded architecture realizes a hybrid (ML and DL) processing module on each edge node, capturing a high level of parallelism tailored to multiple categories of medical records. The cluster encompasses three Raspberry Pi edge devices and an edge server. Lightweight neural networks such as MobileNet, EfficientNet, and ResNet18 are trained and optimized with genetic algorithms to classify brain tumor, pneumonia, and colon cancer. Models were deployed in Python, with PyCharm running on the edge server and Thonny installed on the edge nodes. In terms of accuracy, the proposed GA-optimized ResNet18 for pneumonia diagnosis achieves 93.59% predictive accuracy while reducing classifier computation complexity by 33.59%, whereas EfficientNet-v2 achieves outstanding accuracies of 99.78% and 100% for brain tumor and colon cancer prediction, respectively, with both models preserving a 25% reduction in the model's classifier. More importantly, inference speedups of 28.61% and 29.08% were obtained with parallel two-DL-thread and three-DL-thread configurations, respectively, compared to the sequential implementation. The proposed multimodal multithreaded architecture thus offers promising prospects for comprehensive and accurate anomaly detection from patients' medical imaging and vital signs, contributing to the advancement of healthcare services and aiming to improve patient diagnosis and therapy outcomes.
Computerized Medical Imaging and Graphics, Volume 124, Article 102594.
Citations: 0
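Thread-level parallel dispatch of classification tasks across edge nodes can be sketched with Python's standard thread pool (the node names and the `classify` stub are illustrative, not from the paper's deployment):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def classify(node, record):
    """Stand-in for an on-device AI module's inference call."""
    time.sleep(0.05)                      # simulated model inference latency
    return node, f"prediction({record})"

records = [("edge-1", "brain_mri"), ("edge-2", "chest_xray"), ("edge-3", "colon_histology")]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=3) as pool:   # one thread per edge node (TLP)
    results = list(pool.map(lambda args: classify(*args), records))
elapsed = time.perf_counter() - start
# With three threads the waits overlap, so elapsed is ~0.05 s rather than the ~0.15 s
# a sequential loop over the three records would take.
print(results)
```

In the real system each thread would block on network I/O to its Raspberry Pi node rather than on `time.sleep`, which is exactly the workload shape where thread pools give near-linear speedup despite the GIL.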
PedSemiSeg: Pedagogy-inspired semi-supervised polyp segmentation
IF 5.4 | CAS Q2 (Medicine)
Computerized Medical Imaging and Graphics Pub Date: 2025-07-01 DOI: 10.1016/j.compmedimag.2025.102591
An Wang, Haoyu Ma, Long Bai, Yanan Wu, Mengya Xu, Yang Zhang, Mobarakol Islam, Hongliang Ren
Abstract: Recent advancements in deep learning have contributed to improved polyp segmentation methods, aiding the diagnosis of colorectal cancer and facilitating automated surgery such as endoscopic submucosal dissection (ESD). However, the scarcity of well-annotated data increases the annotation burden and diminishes the performance of fully-supervised learning approaches, and distribution shifts across patients and medical centers require models to generalize well at test time. To address these concerns, this paper presents PedSemiSeg, a pedagogy-inspired semi-supervised learning framework designed to enhance polyp segmentation performance with limited labeled training data. The approach takes inspiration from real-world educational settings, where both teacher feedback and peer tutoring shape the overall learning outcome. Concretely, the outputs for the strongly augmented input (the students) are supervised with pseudo and complementary labels crafted from the output for the weakly augmented input (the teacher), in both positive and negative learning manners. Additionally, reciprocal peer tutoring among the students is guided by their respective prediction entropies. These holistic learning processes aim to achieve consistent predictions across versions of the same input and to maximize utilization of the abundant unlabeled data. Experimental results on two public datasets demonstrate the superiority of the method in polyp segmentation across various labeled-data ratios. Furthermore, the approach exhibits excellent generalization on external unseen multi-center datasets, highlighting its broader clinical significance in practical deployment.
Computerized Medical Imaging and Graphics, Volume 124, Article 102591.
Citations: 0
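The positive (pseudo-label) and negative (complementary-label) supervision of the strong view by the weak view can be sketched as follows (a simplified per-pixel version with an invented confidence threshold; not the paper's exact losses):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def pseudo_and_complementary_loss(weak_logits, strong_logits, tau=0.5, eps=1e-12):
    """Positive learning: cross-entropy of the strong view against confident
    pseudo-labels from the weak view. Negative learning: drive the strong view's
    probability for the weak view's least-likely ("complementary") class to zero."""
    p_weak = softmax(weak_logits)
    p_strong = softmax(strong_logits)
    idx = np.arange(len(p_weak))
    pseudo = p_weak.argmax(-1)
    confident = p_weak.max(-1) >= tau                  # confidence gate on teacher view
    pos = -np.log(p_strong[idx, pseudo] + eps)[confident].mean()
    comp = p_weak.argmin(-1)                           # complementary label: "not this class"
    neg = -np.log(1.0 - p_strong[idx, comp] + eps).mean()
    return pos + neg

rng = np.random.default_rng(7)
weak = 3.0 * rng.standard_normal((6, 2))               # 6 pixels, 2 classes (polyp / background)
strong = weak + 0.3 * rng.standard_normal((6, 2))      # perturbed strong-view logits
loss = pseudo_and_complementary_loss(weak, strong)
print(loss)
```

The negative term lets even low-confidence weak-view predictions contribute supervision, since a model is usually more reliable about which class a pixel is *not* than about which class it is.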