Artificial Intelligence in Medicine: Latest Articles

EEG-based epileptic seizure prediction with patient-tailored spectral–spatial–temporal feature learning
IF 6.2, CAS Tier 2, Medicine
Artificial Intelligence in Medicine Pub Date: 2026-04-01 Epub Date: 2026-01-28 DOI: 10.1016/j.artmed.2026.103371
Woohyeok Choi, Jun-Mo Kim, Hyeonyeong Nam, Soyeon Bak, Dong-Hee Shin, Tae-Eui Kam
Epilepsy is a chronic brain disorder characterized by recurrent seizures resulting from abnormal brain cell activity. Because these seizures are unpredictable, anticipating and promptly addressing them is critical to improving patients' quality of life. Electroencephalography (EEG) is a frequently employed technique for seizure prediction, owing to its low cost and high temporal resolution. The complexity of EEG signals, however, has driven interest in machine learning and deep learning for automated seizure prediction systems. Conventional approaches that analyze seizures with predefined methodologies may not adequately account for the variability in spectral and spatial characteristics among patients. To address these limitations and provide a more effective and interpretable approach, we introduce the patient-tailored seizure prediction network (PSP-Net) for adaptive spectral–spatial–temporal EEG feature representation learning. PSP-Net combines patient-tailored bandpass filters, a patient-tailored spatial coupling matrix, and an attentive temporal convolution network-based feature extractor in a unified framework to automatically extract patient-specific spectral–spatial–temporal features from EEG data. The proposed method achieves state-of-the-art performance on multiple publicly available seizure datasets, highlighting its potential as a reliable tool for personalized clinical applications.
Volume 174, Article 103371.
Citations: 0
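PSP-Net's patient-tailored bandpass filters are learned inside the network; the paper's implementation is not shown here, but the core idea of isolating a per-patient frequency band can be sketched with a simple FFT-based bandpass. The sampling rate, band edges, and synthetic test signal below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def bandpass_fft(signal, fs, low, high):
    """Crude bandpass: zero out FFT bins outside [low, high] Hz."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    mask = (freqs >= low) & (freqs <= high)
    return np.fft.irfft(spectrum * mask, n=signal.size)

fs = 256  # Hz, a common EEG sampling rate (assumption)
t = np.arange(0, 2, 1.0 / fs)
# Synthetic "EEG": a 10 Hz (alpha-band) rhythm plus a 40 Hz (gamma-band) rhythm
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

# A "patient-tailored" band here is just a per-patient (low, high) pair
alpha_only = bandpass_fft(eeg, fs, 8, 13)
```

In the actual model the band edges would be learned per patient rather than fixed by hand.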
Towards more efficient and better multi-view and multi-modal retinopathy assisted diagnosis
IF 6.2, CAS Tier 2, Medicine
Artificial Intelligence in Medicine Pub Date: 2026-04-01 Epub Date: 2026-02-04 DOI: 10.1016/j.artmed.2026.103376
Yonghao Huang, Chuan Zhou, Leiting Chen
Fundus images are widely used in early retinopathy examination to prevent visual impairment caused by retinopathy. The fundus-image examination process can be summarized in three steps: (1) ophthalmologists obtain comprehensive fundus information by jointly analyzing multi-view fundus images; (2) they obtain complementary lesion information by comparatively analyzing multi-modal fundus images; (3) they diagnose retinopathy categories and write specialized fundus reports. To simulate this clinical examination process, we introduce an efficient multi-view and multi-modal fundus image joint auxiliary-diagnosis framework that simultaneously accepts fundus images of different views and modalities for pathology classification and symptom report generation. In our framework, we propose jointly employing self-attention in intra-view local and inter-view sparse global windows to extract comprehensive fundus information across views. We further propose a multi-modal fusion transformer with shunted multi-scale cross-attention that models lesions of various scales by splitting attention granularity between the query and queried modalities, fusing complementary lesion information across modalities. Experimental results on retinopathy classification and report generation indicate that our method outperforms other benchmark methods, achieving a classification accuracy of 83.96% and a report-generation CIDEr of 0.934.
Volume 174, Article 103376.
Citations: 0
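The paper's shunted multi-scale cross-attention is more elaborate than can be shown here, but the basic cross-attention operation it builds on, where queries come from one modality and keys/values from another, can be sketched in a few lines. The token counts, feature dimension, and the `cross_attention` helper are illustrative assumptions, not the authors' code:

```python
import numpy as np

def cross_attention(q_feats, kv_feats):
    """Scaled dot-product cross-attention: queries from one modality,
    keys/values from another, fusing complementary information."""
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)
    # Numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ kv_feats

rng = np.random.default_rng(0)
fundus = rng.normal(size=(4, 8))  # e.g. 4 tokens from a colour-fundus view
second = rng.normal(size=(6, 8))  # e.g. 6 tokens from a second modality
fused = cross_attention(fundus, second)  # one fused token per fundus token
```

Each output row is a convex combination of the second modality's tokens, weighted by similarity to the corresponding fundus token.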
Rethinking U-Net architecture in medical imaging: Advancing the efficient and interpretable UKAN-CBAM framework for colorectal polyp segmentation
IF 6.2, CAS Tier 2, Medicine
Artificial Intelligence in Medicine Pub Date: 2026-04-01 Epub Date: 2026-01-15 DOI: 10.1016/j.artmed.2026.103352
Md. Faysal Ahamed, Fariya Bintay Shafi, Md. Rabiul Islam, Md. Fahmidun Nabi, Julfikar Haider
Prompt detection of colorectal polyps is essential for preventing colorectal cancer, a leading cause of cancer-related deaths worldwide. Manual detection through medical imaging, however, faces significant challenges, including high costs, reliance on skilled endoscopists, and susceptibility to errors, which can result in missed diagnoses and adverse health outcomes. This study proposes UKAN-CBAM, an advanced semantic segmentation framework that combines Kolmogorov-Arnold Networks (KANs) with Convolutional Block Attention Modules (CBAM) within a U-Net architecture. The two-phase encoder-decoder design integrates convolutional and tokenized KAN blocks, leveraging the efficiency of KANs and the feature-refinement capabilities of CBAM to achieve superior segmentation performance with enhanced interpretability and compactness. The framework was trained on the Kvasir-SEG dataset and validated on external datasets including CVC-ClinicDB, CVC-ColonDB, EndoScene, PolypGen, ETIS-LaribPolypDB, and Piccolo; 10-fold cross-validation was also performed to ensure robustness and generalization. UKAN-CBAM outperformed state-of-the-art (SOTA) methods, achieving an mDice of 93.80%, an mIoU of 89.18%, a precision of 95.65%, a recall of 92.02%, and an accuracy of 96.21%. It is also computationally efficient, requiring only 55.99 MB of memory and 5.214 GFLOPs, with an inference time of 122.272 ms per prediction. Feature maps, heatmaps, and Grad-CAM showed that the model focuses on key regions, while ablations highlight the importance of configuration choices for robustness. Paired t-tests with P values, confidence intervals, and standard deviations, together with 10-fold cross-validation, confirmed that the reported improvements were statistically significant and not due to chance. Strong generalization across diverse image and video datasets, combined with real-time capability, makes the model an effective and reliable tool for clinical applications. This integration of attention mechanisms and interpretability represents a significant step forward in medical diagnostics. Code availability: https://github.com/Faysal425/UKAN_CBAM_Segmentation
Volume 174, Article 103352.
Citations: 0
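CBAM itself is a published module; a minimal numpy sketch of its channel-attention half (spatial average- and max-pooling, a shared two-layer MLP, and a per-channel sigmoid gate) conveys the idea. The weight shapes and reduction ratio below are illustrative assumptions, not the UKAN-CBAM configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap, w1, w2):
    """CBAM-style channel attention (simplified): pool spatially,
    pass both pooled vectors through a shared 2-layer MLP, then
    gate each channel with a sigmoid."""
    # fmap: (C, H, W)
    avg = fmap.mean(axis=(1, 2))  # (C,) average-pooled descriptor
    mx = fmap.max(axis=(1, 2))    # (C,) max-pooled descriptor
    gate = sigmoid(w2 @ np.maximum(0, w1 @ avg) + w2 @ np.maximum(0, w1 @ mx))
    return fmap * gate[:, None, None]

rng = np.random.default_rng(1)
C = 8
fmap = rng.normal(size=(C, 16, 16))
w1 = rng.normal(size=(C // 2, C)) * 0.1  # reduction ratio 2 (assumption)
w2 = rng.normal(size=(C, C // 2)) * 0.1
refined = channel_attention(fmap, w1, w2)
```

The full CBAM follows this with an analogous spatial-attention step, omitted here for brevity.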
BRLA-DDI: A novel framework for drug–drug interaction extraction
IF 6.2, CAS Tier 2, Medicine
Artificial Intelligence in Medicine Pub Date: 2026-04-01 Epub Date: 2026-01-21 DOI: 10.1016/j.artmed.2026.103353
Zhu Yuan, Shuailiang Zhang, Zongjin Li, Huiyun Zhang, Huaqi Zhang, Yaxun Jia
Drug–drug interaction (DDI) extraction is a pivotal task in biomedical information processing, focused on identifying potentially adverse drug reactions (ADRs). Despite significant progress in DDI extraction, existing models struggle with complex sentence structures and ambiguous interactions, especially in cases involving rare or implicit drug relationships. To overcome these limitations, this paper presents BRLA-DDI, a novel model that integrates a BioBERT-LSTM mechanism, a Relational Graph Convolutional Network (R-GCN), and a loss function incorporating attention (loss+attention) to improve both accuracy and generalization on DDI tasks. The core innovation of BRLA-DDI lies in the synergistic integration of these components, coupled with two methodological contributions. First, the model employs BioBERT and BiLSTM for text feature extraction, effectively leveraging the contextual information within drug descriptions. Second, by thoroughly integrating a multi-head attention mechanism with the R-GCN, BRLA-DDI strengthens its ability to capture intricate relationships between drug entities. Additionally, we introduce a loss-attention mechanism that merges cross-entropy loss with an attention-based regularization term, guiding the model toward key features during optimization. Lastly, a dynamic negative sampling strategy mitigates the zero-loss issue prevalent in traditional methods, accelerating convergence and enhancing robustness. Experimental results demonstrate the superiority of BRLA-DDI, which achieves a precision of 87.68%, a recall of 88.06%, and an F1 score of 87.87% on the DDI Extraction 2013 dataset, surpassing a wide range of existing methods. Crucially, the model also performs robustly on the external TAC 2018 dataset, providing strong evidence of generalizability across data sources and annotation styles. All code and data are publicly released at https://github.com/Hero-Legend/LossAtt-DDI.
Volume 174, Article 103353.
Citations: 0
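The exact form of the paper's loss+attention term is not reproduced in the abstract; one plausible reading, cross-entropy plus an attention-based regularizer (here the entropy of the attention weights, scaled by a hypothetical coefficient `lam`), can be sketched as:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def loss_with_attention_reg(logits, labels, attn, lam=0.1):
    """Cross-entropy plus an attention-entropy regulariser: a rough,
    hypothetical stand-in for the paper's loss+attention term."""
    probs = softmax(logits)
    ce = -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()
    # Penalising high entropy nudges attention toward sharper, key features
    attn_entropy = -(attn * np.log(attn + 1e-12)).sum(axis=-1).mean()
    return ce + lam * attn_entropy

logits = np.array([[2.0, 0.1, -1.0], [0.2, 1.5, 0.3]])
labels = np.array([0, 1])
attn = np.array([[0.7, 0.2, 0.1], [0.4, 0.4, 0.2]])  # rows sum to 1
loss = loss_with_attention_reg(logits, labels, attn)
```

With `lam=0` the function reduces to plain cross-entropy, which makes the effect of the regularizer easy to isolate in ablations.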
Smiling difficulties in Alzheimer's disease linked to reduced nucleus accumbens and pallidum brain volume: Deep learning insights
IF 6.2, CAS Tier 2, Medicine
Artificial Intelligence in Medicine Pub Date: 2026-03-01 Epub Date: 2025-12-30 DOI: 10.1016/j.artmed.2025.103347
Tomomichi Iizuka, Yumi Umeda-Kameyama, Makoto Fukasawa, Masahiro Akishita, Masashi Kameyama
Patients tend to lose the ability to smile during the course of dementia, yet such impairments have rarely been reported, likely because facial expressions are difficult to quantify. Recent developments in deep learning, a machine learning method used in artificial intelligence (AI), have made feature extraction automatic. We used the output of an image-classification AI to quantify smiles in participants with Alzheimer's disease (AD) and with normal cognition (NC). We found that the ability to form a smile on request is impaired in patients with AD and is associated with reduced volumes of the nucleus accumbens and pallidum. Furthermore, smiling faces were classified with higher accuracy than neutral faces in discriminating between AD and NC, and the AI score for neutral faces correlated significantly with cognitive function. These findings generate hypotheses regarding the neural mechanisms underlying impaired facial expressions in dementia.
Volume 173, Article 103347.
Citations: 0
UniStain: A unified and organ-aware virtual H&E staining framework for label-free autofluorescence images
IF 6.2, CAS Tier 2, Medicine
Artificial Intelligence in Medicine Pub Date: 2026-03-01 Epub Date: 2025-12-30 DOI: 10.1016/j.artmed.2025.103335
Lulin Shi, Xingzhong Hou, James K.W. Lai, Ivy H.M. Wong, Bingxin Huang, Athena L.Y. Hui, Ronald C.K. Chan, Terence T.W. Wong
While hematoxylin and eosin (H&E) staining remains the gold standard for pathological diagnosis, its chemistry-dependent workflow has significant limitations, including time-consuming protocols, hazardous reagent disposal, and batch-to-batch variability in stain quality. We present UniStain, a virtual staining framework that leverages label-free autofluorescence (AF) imaging and prompt-based deep learning to overcome these challenges. Unlike existing single-organ approaches that require multiple specialized models, our architecture enables versatile multi-tissue staining with a single model, significantly reducing computational overhead. The proposed cross-patch self-attention guidance (CPSG) mechanism addresses critical whole-slide-image challenges by maintaining style consistency across adjacent patches and eliminating stitching artifacts. To support comprehensive evaluation, we curate and release the first multi-organ AF/H&E dataset with human tissue samples. We additionally introduce downstream clinical validation tasks, including image retrieval and cancer subtyping, establishing a robust evaluation framework for virtual staining models. Quantitative assessments (image-quality metrics, visual Turing tests) and downstream analyses demonstrate UniStain's superior performance compared with existing image-translation methods, achieving state-of-the-art results while eliminating the need for chemical staining. The dataset and code of UniStain can be found at https://github.com/TABLAB-HKUST/UniStain.
Volume 173, Article 103335.
Citations: 0
Tackling data scarcity: Synthetic tumour and mask generation to improve image segmentation
IF 6.2, CAS Tier 2, Medicine
Artificial Intelligence in Medicine Pub Date: 2026-03-01 Epub Date: 2025-12-31 DOI: 10.1016/j.artmed.2025.103348
Félix Quinton, Benoit Presles, Romain Popoff, François Godard, Olivier Chevallier, Julie Pellegrinelli, Jean-Marc Vrigneaud, Jean-Louis Alberini, Fabrice Meriaudeau
Given the increasing data requirements of deep learning models and the scarcity of medical imaging data, new data augmentation techniques are receiving particular attention. This paper explores the subfield of tumour synthesis within medical image generation, focusing on synthetic tumours in MR images. The study introduces a novel tumour generation method using diffusion models, designed to inpaint visually convincing 3D synthetic liver tumours into real MRI volumes while generating the corresponding masks using simplex deformation. The approach has been used successfully to inpaint images with 1,000 synthetic tumours and yields significant performance improvements in image segmentation tasks. In particular, our method improved the Dice coefficient by 6.7 points on the ATLAS test set without relying on external data; combined with a pseudo-annotated external dataset, the improvement increased to 10 points. This study not only demonstrates the ability to segment tumours but also paves the way for various synthetic-data-based applications in medical imaging.
Volume 173, Article 103348.
Citations: 0
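The reported gains are measured in Dice points; for reference, the Dice coefficient between a predicted and a ground-truth binary mask is computed as follows (the toy 8x8 masks are purely illustrative):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks:
    2 * |pred AND target| / (|pred| + |target|)."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

target = np.zeros((8, 8), dtype=bool)
target[2:6, 2:6] = True       # a 4x4 "tumour" in the ground truth
pred = np.zeros_like(target)
pred[3:7, 3:7] = True         # a shifted 4x4 prediction, overlap is 3x3

score = dice(pred, target)    # 2*9 / (16+16) = 0.5625
```

A "6.7-point" improvement means, e.g., moving a mean Dice from 0.70 to 0.767 on the same test set.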
Learning Health Systems provide a glide path to safe landing for AI in health
IF 6.2, CAS Tier 2, Medicine
Artificial Intelligence in Medicine Pub Date: 2026-03-01 Epub Date: 2025-12-31 DOI: 10.1016/j.artmed.2025.103346
Vasa Curcin, Brendan Delaney, Ahmad Alkhatib, Neil Cockburn, Olivia Dann, Olga Kostopoulou, Daniel Leightley, Matthew Maddocks, Sanjay Modgil, Krishnarajah Nirantharakumar, Philip Scott, Ingrid Wolfe, Kelly Zhang, Charles Friedman
Artificial Intelligence (AI) holds significant promise for healthcare but often struggles to transition from development to clinical integration. This paper argues that Learning Health Systems (LHSs), socio-technical ecosystems designed for continuous data-driven improvement, provide a potential "glide path" for safe, sustainable AI deployment. Just as modern aviation depends on instrument landing systems, safe and effective integration of AI into healthcare requires the socio-technical infrastructure of LHSs, which enable iterative development and monitoring of AI tools and integrate clinical, technical, and ethical considerations through stakeholder collaboration. LHSs address key challenges in AI implementation, including model generalizability, workflow integration, and transparency, by embedding co-creation, real-world evaluation, and continuous learning into care processes. Unlike static deployments, LHSs support the dynamic evolution of AI systems, incorporating feedback and recalibration to mitigate performance drift and bias. Moreover, they embed governance and regulatory functions: clarifying accountability, supporting data and model provenance, and upholding FAIR (Findable, Accessible, Interoperable, Reusable) principles. LHSs also promote "human-in-the-loop" safety through structured studies of human-AI interaction and shared decision-making. The paper outlines practical steps to align AI with LHS frameworks, including investment in data infrastructure, continuous model monitoring, and fostering a learning culture. Embedding AI in LHSs transforms implementation from a one-time event into a sustained, evidence-based learning process that aligns innovation with clinical realities, ultimately advancing patient care, health equity, and system resilience. The arguments build on insights from an international workshop hosted in 2025 and offer a strategic vision for the future of AI in healthcare.
Volume 173, Article 103346.
Citations: 0
Siamese evolutionary masking: Enhancing the generalization of self-supervised medical image segmentation model
IF 6.2, CAS Tier 2, Medicine
Artificial Intelligence in Medicine Pub Date: 2026-03-01 Epub Date: 2026-01-07 DOI: 10.1016/j.artmed.2026.103349
Yichen Zhi, Hongxia Bie, Jiali Wang, Zhao Jing
Self-supervised learning autonomously extracts features from unlabeled data, supporting downstream segmentation tasks with limited annotations. However, variations in devices, imaging parameters, and other factors produce distribution shifts across medical images, hurting model generalizability. Mainstream frameworks include instance discrimination, which learns features from different views of the same image but may miss details, and masked image modeling (MIM), which captures local features by predicting masked regions but lacks global context. To enhance generalizability by combining global and local information, we introduce the Siamese Evolutionary Masking (SEM) framework, which employs a Siamese architecture composed of an online branch and a target branch. An evolutionary masking strategy within the online branch transitions from grid masking to block masking over the course of training, encouraging the model to develop more general visual features. Additionally, a Switch Decoder module aligns the online branch's predicted features with the true features in the target branch, overcoming the challenge of balancing global and local information. Experiments on six public datasets, including four skin datasets (SD-260, ISIC2019, ISIC2017, and PH2) and two chest X-ray datasets (Chest X-ray PD and Chest X-ray), show that SEM achieves strong performance among self-supervised methods. In cross-dataset experiments with different distributions, SEM delivered the best segmentation and generalization performance, with Dice scores of 81.8% and 91.1%, Jaccard indices of 72.2% and 84.4%, and the best HD95 values of 13.1 and 10.5, respectively. Code is available at https://github.com/wsdl666/Siamese-Evolutionary-Masking.
Volume 173, Article 103349.
Citations: 0
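The grid-to-block transition of SEM's evolutionary masking can be illustrated schematically. The patch-grid size, block size, and 50% switch point below are assumptions for illustration, a rough reading of the strategy rather than the authors' implementation:

```python
import numpy as np

def grid_mask(n, step=2):
    """Grid masking: mask every `step`-th patch in a regular grid."""
    mask = np.zeros((n, n), dtype=bool)
    mask[::step, ::step] = True
    return mask

def block_mask(n, block, rng):
    """Block masking: mask one contiguous block of patches."""
    mask = np.zeros((n, n), dtype=bool)
    r, c = rng.integers(0, n - block + 1, size=2)
    mask[r:r + block, c:c + block] = True
    return mask

def evolutionary_mask(n, progress, rng):
    """Hypothetical schedule: grid masking early in training,
    block masking later (progress is the fraction of training done)."""
    return grid_mask(n) if progress < 0.5 else block_mask(n, n // 2, rng)

rng = np.random.default_rng(42)
early = evolutionary_mask(8, 0.1, rng)  # scattered single-patch masks
late = evolutionary_mask(8, 0.9, rng)   # one contiguous 4x4 block
```

Scattered masks force reliance on local context; a large contiguous block forces longer-range, more global reasoning, which is the intuition behind evolving from one to the other.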
Integrating probabilistic trees and causal networks for clinical and epidemiological data
IF 6.2, CAS Tier 2, Medicine
Artificial Intelligence in Medicine Pub Date: 2026-03-01 Epub Date: 2026-01-06 DOI: 10.1016/j.artmed.2026.103350
Sheresh Zahoor, Pietro Liò, Gaël Dias, Mohammed Hasanuzzaman
Healthcare decision-making requires not only accurate predictions but also insight into how factors influence patient outcomes. While traditional machine learning (ML) models excel at predicting outcomes, such as identifying high-risk patients, they are limited in addressing "what if" questions about interventions. This study introduces the Probabilistic Causal Fusion (PCF) framework, which integrates Causal Bayesian Networks (CBNs) and Probability Trees (PTrees) to extend beyond prediction. PCF leverages causal relationships from CBNs to structure PTrees, enabling both the quantification of factor impacts and the simulation of hypothetical interventions. The framework is evaluated on three clinically diverse, real-world datasets (MIMIC-IV, the Framingham Heart Study, and BRFSS Diabetes), demonstrating predictive performance comparable to conventional ML models while offering enhanced interpretability and causal reasoning. In contrast to approaches focused solely on prediction, PCF offers a unified framework for prediction, intervention modelling, and counterfactual analysis, forming a holistic toolkit for clinical decision support. To enhance interpretability, PCF incorporates sensitivity analysis and SHapley Additive exPlanations (SHAP): sensitivity analysis quantifies the influence of causal parameters on outcomes such as Length of Stay (LOS), Coronary Heart Disease (CHD), and Diabetes, while SHAP highlights the importance of individual features in predictive modelling. This dual-layered interpretability offers both macro-level insight into causal pathways and micro-level explanations for individual predictions. By combining causal reasoning with predictive modelling, PCF bridges the gap between clinical intuition and data-driven insights. Its ability to uncover relationships between modifiable factors and to simulate hypothetical scenarios gives clinicians a clearer understanding of causal pathways, supporting more informed, evidence-based decision-making across diverse healthcare settings.
Volume 173, Article 103350.
Citations: 0
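The "what if" queries PCF targets hinge on the difference between conditioning on a variable and intervening on it. A toy confounded model (all probabilities invented for illustration, not taken from the paper) makes the distinction concrete:

```python
# Toy causal model: severity S confounds treatment T and recovery O.
# Conditioning on T=1 shifts the severity distribution (treated patients
# tend to be sicker); intervening with do(T=1) does not.
p_s = {0: 0.5, 1: 0.5}                       # P(severity)
p_t_given_s = {0: {0: 0.8, 1: 0.2},          # mild patients rarely treated
               1: {0: 0.2, 1: 0.8}}          # severe patients usually treated
p_o_given_st = {(0, 0): 0.9, (0, 1): 0.95,   # P(recovery | severity, treatment)
                (1, 0): 0.3, (1, 1): 0.6}

def p_outcome_conditioned(t):
    """P(O=1 | T=t): Bayes rule, severity re-weighted by observed treatment."""
    num = sum(p_s[s] * p_t_given_s[s][t] * p_o_given_st[(s, t)] for s in (0, 1))
    den = sum(p_s[s] * p_t_given_s[s][t] for s in (0, 1))
    return num / den

def p_outcome_do(t):
    """P(O=1 | do(T=t)): severity keeps its prior under intervention."""
    return sum(p_s[s] * p_o_given_st[(s, t)] for s in (0, 1))

obs = p_outcome_conditioned(1)  # 0.67: treated patients skew severe
do = p_outcome_do(1)            # 0.775: effect of treating everyone
```

The gap between the two numbers is exactly what a purely predictive model cannot express, and what a causal structure over the probability tree recovers.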