Medical Image Analysis, vol. 109, Article 103903. Pub Date: 2026-03-01. Epub Date: 2025-12-04. DOI: 10.1016/j.media.2025.103903
Perivascular space identification nnUNet for generalised usage (PINGU)
Benjamin Sinclair, William Pham, Lucy Vivash, Jasmine Moses, Miranda Lynch, Karina Dorfman, Cassandra Marotta, Shaun Koh, Jacob Bunyamin, Ella Rowsthorn, Alex Jarema, Himashi Peiris, Zhaolin Chen, Sandy R. Shultz, David K. Wright, Dexiao Kong, Sharon L. Naismith, Terence J. O'Brien, Meng Law
Abstract: Perivascular spaces (PVSs) form a central component of the brain's waste clearance system, the glymphatic system. These structures are visible on MRI when enlarged, and their morphology is associated with aging and neurological disease. Manual quantification of PVS is time-consuming and subjective. Numerous deep learning methods have been developed for automated PVS segmentation. However, the majority of these algorithms have been developed and evaluated on homogeneous datasets and high-resolution scans, which may limit their applicability to the wide range of image qualities acquired in clinical and research settings. In this work we train nnUNet, a top-performing task-driven biomedical image segmentation algorithm, on a heterogeneous training sample of manually segmented MRIs of varying quality and resolution, drawn from 7 datasets acquired on 6 different scanners. The model is compared to the two currently publicly available deep learning methods for 3D PVS segmentation, evaluated on scans with a range of resolutions and qualities. The resulting model, PINGU (Perivascular space Identification Nnunet for Generalised Usage), achieved voxel- and cluster-level Dice scores of 0.50 (SD = 0.15) and 0.63 (0.17) in the white matter (WM), and 0.54 (0.11) and 0.66 (0.17) in the basal ganglia (BG). Performance on data from unseen "external" sites was substantially lower for both PINGU (0.20-0.38 [WM, voxel], 0.29-0.58 [WM, cluster], 0.22-0.36 [BG, voxel], 0.46-0.60 [BG, cluster]) and the publicly available algorithms (0.18-0.30 [WM, voxel], 0.29-0.38 [WM, cluster], 0.10-0.20 [BG, voxel], 0.15-0.37 [BG, cluster]). Nonetheless, PINGU strongly outperformed the publicly available algorithms, particularly in the BG. PINGU stands out as a broad-use PVS segmentation tool, with particular strength in the BG, a region where PVS is highly related to vascular disease and pathology.
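The abstract distinguishes voxel-level from cluster-level Dice. As a rough illustration of that distinction (one common convention, not necessarily the paper's exact definition), voxel Dice compares masks element-wise, while cluster Dice counts connected components that overlap any cluster in the other mask:

```python
from collections import deque

def voxel_dice(a, b):
    """Element-wise Dice over two binary masks (nested lists)."""
    inter = sum(x and y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    total = sum(sum(r) for r in a) + sum(sum(r) for r in b)
    return 2 * inter / total if total else 1.0

def clusters(mask):
    """Connected components (4-connectivity) via BFS; returns a list of voxel sets."""
    h, w = len(mask), len(mask[0])
    seen, comps = set(), []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and (i, j) not in seen:
                comp, q = set(), deque([(i, j)])
                seen.add((i, j))
                while q:
                    y, x = q.popleft()
                    comp.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] \
                                and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            q.append((ny, nx))
                comps.append(comp)
    return comps

def cluster_dice(a, b):
    """Dice over clusters: a cluster counts as matched if it overlaps the other mask."""
    ca, cb = clusters(a), clusters(b)
    fg_a = set().union(*ca) if ca else set()
    fg_b = set().union(*cb) if cb else set()
    tp_a = sum(1 for c in ca if c & fg_b)  # ground-truth clusters detected
    tp_b = sum(1 for c in cb if c & fg_a)  # predicted clusters hitting truth
    denom = len(ca) + len(cb)
    return (tp_a + tp_b) / denom if denom else 1.0
```

A prediction can therefore score well at the cluster level while scoring modestly at the voxel level (as in the figures above) when it finds most lesion-like clusters but misjudges their extent.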
Medical Image Analysis, vol. 109, Article 103927. Pub Date: 2026-03-01. Epub Date: 2025-12-29. DOI: 10.1016/j.media.2025.103927
Multi-cancer framework with cancer-aware attention and adversarial mutual-information minimization for whole slide image classification
Sharon Peled, Yosef E. Maruvka, Moti Freiman
Abstract: Whole Slide Images (WSIs) are crucial in modern pathology, offering high-resolution data for accurate diagnosis, treatment planning, and research. Deep learning methods have recently been proposed to harness this data by extracting and interpreting complex patterns. However, these approaches often focus on specific tumor types, limiting their generalizability across diverse pathological conditions and restricting scalability. This relatively narrow focus ultimately stems from the inherent heterogeneity of histopathology and the diverse morphological and molecular characteristics of different tumors. To address this, we propose a novel approach for multi-cancer WSI analysis, designed to leverage the diversity of different tumor types. We introduce a Cancer-Aware Attention module that models both shared patterns across cancers and cancer-specific variations, addressing heterogeneity and enhancing cross-tumor generalization. Furthermore, we construct an adversarial cancer regularization mechanism that reduces cancer-specific biases through mutual-information minimization. Additionally, we develop a hierarchical sample balancing strategy to mitigate data imbalances and promote unbiased learning. Together, these components form a cohesive framework for unbiased multi-cancer WSI analysis. Extensive experiments on a purpose-built multi-cancer dataset demonstrate significant improvements in generalization, providing a scalable solution for WSI classification across diverse cancer types.
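The sample balancing strategy is described only at a high level. As a flat, simplified illustration of the balancing idea (my sketch, not the authors' hierarchical scheme), inverse-frequency weights give each cancer type equal total sampling mass so rare types are not drowned out:

```python
from collections import Counter

def balanced_weights(cancer_types):
    """Inverse-frequency sampling weights over a list of per-sample type labels.
    Each type contributes equal total probability mass (weights sum to 1)."""
    counts = Counter(cancer_types)
    k = len(counts)  # number of distinct cancer types
    return [1.0 / (k * counts[c]) for c in cancer_types]
```

For example, with three breast and one lung sample, each lung sample is weighted three times more heavily than each breast sample.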
Medical Image Analysis, vol. 109, Article 103924. Pub Date: 2026-03-01. Epub Date: 2025-12-24. DOI: 10.1016/j.media.2025.103924
Non-contrast CT esophageal varices grading through clinical prior-enhanced multi-organ analysis
Xiaoming Zhang, Chunli Li, Jiacheng Hao, Yuan Gao, Danyang Tu, Jianyi Qiao, Xiaoli Yin, Le Lu, Ling Zhang, Ke Yan, Yang Hou, Yu Shi
Abstract: Esophageal varices (EV) are a critical complication of portal hypertension, affecting approximately 60% of cirrhosis patients and carrying a significant bleeding risk of roughly 30%. While EV is traditionally diagnosed through invasive endoscopy, non-contrast computed tomography (NCCT) presents a potential non-invasive alternative that has yet to be fully utilized in clinical practice. We present Multi-Organ-COhesion Network++ (MOON++), a novel multimodal framework that enhances EV assessment through comprehensive analysis of NCCT scans. Inspired by clinical evidence correlating organ volumetric relationships with liver disease severity, MOON++ synthesizes imaging characteristics of the esophagus, liver, and spleen through multimodal learning. We evaluated our approach on 1631 patients whose endoscopically confirmed EV was classified into four severity grades. Validation on 239 patient cases and independent testing on 289 cases demonstrate superior performance compared to conventional single-organ methods, achieving an AUC of 0.894 versus 0.803 for severe-grade EV classification (G3 versus <G3) and 0.921 versus 0.793 for differentiating moderate to severe grades (≥G2 versus <G2). We also conducted a reader study involving experienced radiologists to further validate the performance of MOON++. To our knowledge, MOON++ is the first comprehensive multi-organ NCCT analysis framework incorporating clinical knowledge priors for EV assessment, potentially offering a promising non-invasive diagnostic alternative. Code is available at https://github.com/StevenHaojc/MOON.
Medical Image Analysis, vol. 109, Article 103908. Pub Date: 2026-03-01. Epub Date: 2025-12-14. DOI: 10.1016/j.media.2025.103908
CLIP-Guided Generative network for pathology nuclei image augmentation
Yanan Zhang, Qingyang Liu, Qian Chen, Xiangzhi Bai
Abstract: Nuclei segmentation and classification play a crucial role in the quantitative analysis of computational pathology (CPath). However, the high cost of annotating pathology nuclei images makes large labeled datasets difficult to create, significantly limiting the performance of deep learning-based nuclei segmentation methods. Generative data augmentation offers a promising solution by substantially expanding the available training data without additional annotation. In medical image analysis, Generative Adversarial Networks (GANs) have proven effective for data augmentation, enhancing model performance by generating realistic synthetic data. However, these approaches lack scalability for multi-class data, as nuclei masks alone cannot provide sufficient information for diverse image generation. Recently, visual-language foundation models, pretrained on large-scale image-caption pairs, have demonstrated robust performance in pathological diagnostic tasks. In this study, we propose a CLIP-guided generative data augmentation method for nuclei segmentation and classification, leveraging pretrained pathological CLIP text and image encoders in both the generator and the discriminator. Specifically, we first create text descriptions by processing paired histopathology images and nuclei masks, encoding information such as organ tissue type, cell count, and nuclei types. These paired text descriptions and nuclei masks are then fed into our multi-modal conditional image generator to guide the synthesis of realistic histopathology images. To ensure the quality of synthesized images, we use a high-resolution image discriminator and a CLIP image encoder-based discriminator, focusing on both local and global features of histopathology images. The synthetic histopathology images, paired with corresponding nuclei masks, are added to the real dataset to train the nuclei segmentation and classification model. Experiments on diverse publicly available pathology nuclei datasets, including both qualitative and quantitative analyses, demonstrate the effectiveness of the proposed method and underscore the advantages of our data augmentation approach. The code is available at https://github.com/zhangyn1415/CGPN-GAN.
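The caption-building step (organ tissue type, cell count, nuclei types derived from a mask) can be pictured with a small sketch; the function name and template below are hypothetical, standing in for whatever description format the authors use:

```python
from collections import Counter

def build_caption(organ, nuclei_labels):
    """Hypothetical mask-to-text step for CLIP-style conditioning.

    organ         -- tissue/organ type, e.g. "colon"
    nuclei_labels -- one class name per annotated nucleus in the mask
    """
    counts = Counter(nuclei_labels)
    parts = ", ".join(f"{n} {cls} nuclei" for cls, n in sorted(counts.items()))
    return (f"A histopathology image of {organ} tissue containing "
            f"{len(nuclei_labels)} nuclei: {parts}.")
```

Such a string, embedded by the CLIP text encoder, carries the multi-class information that a binary nuclei mask alone cannot.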
Medical Image Analysis, vol. 109, Article 103895. Pub Date: 2026-03-01. Epub Date: 2025-12-08. DOI: 10.1016/j.media.2025.103895
Unsupervised anomaly detection in medical imaging using aggregated normative diffusion
Alexander Frotscher, Jaivardhan Kapoor, Thomas Wolfers, Christian F. Baumgartner
Abstract: Early detection of anomalies in medical images such as brain magnetic resonance imaging (MRI) is highly relevant for the diagnosis and treatment of many medical conditions. Supervised machine learning methods are limited to the small number of pathologies for which labeled data are readily available. In contrast, unsupervised anomaly detection (UAD) has the potential to identify a broader spectrum of anomalies by spotting deviations from normal patterns. Our research demonstrates that previous state-of-the-art UAD approaches do not generalise well to diverse types of anomalies in multi-modal MRI data. To overcome this, we introduce a new UAD method named Aggregated Normative Diffusion (ANDi). ANDi operates by aggregating the differences between predicted denoising steps and ground-truth backward transitions in Denoising Diffusion Probabilistic Models (DDPMs) trained on pyramidal Gaussian noise. We validate ANDi against four recent UAD baselines across three diverse brain MRI datasets. We show that ANDi, in some cases, substantially surpasses these baselines and shows increased robustness to varying types of anomalies. In particular, for detecting multiple sclerosis (MS) lesions, ANDi improves AUPRC by up to 44% (from 0.302 to 0.436, +0.134, on the Ljubljana dataset).
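The aggregation idea can be sketched independently of any trained DDPM. Below, the model's per-step prediction is stubbed out as a callable, and the anomaly map is the mean absolute residual between predicted and actual backward transitions along a noising trajectory (a schematic of the aggregation only, not the authors' implementation):

```python
def aggregated_anomaly_map(x_ts, predict_prev, agg="mean"):
    """Schematic ANDi-style aggregation.

    x_ts         -- list [x_T, ..., x_0] of flattened images (lists of floats)
                    along a single noising trajectory
    predict_prev -- callable (x_t, t) -> model's predicted previous step x_{t-1}
    Returns a per-voxel score: the aggregated |predicted - actual| residual.
    """
    n = len(x_ts[0])
    acc = [0.0] * n
    steps = 0
    for t in range(len(x_ts) - 1):      # consecutive pairs (x_t, x_{t-1})
        pred = predict_prev(x_ts[t], t)
        actual = x_ts[t + 1]
        for i in range(n):
            acc[i] += abs(pred[i] - actual[i])
        steps += 1
    if agg == "mean" and steps:
        acc = [v / steps for v in acc]
    return acc
```

Voxels whose backward transitions the normative model consistently mispredicts accumulate large residuals and are flagged as anomalous.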
Medical Image Analysis, vol. 109, Article 103915. Pub Date: 2026-03-01. DOI: 10.1016/j.media.2025.103915
DTG: Dual transformers-based generative adversarial networks for retinal 2D/3D OCT image classification
Badr Ait Hammou, Renaud Duval, Marie-Carole Boucher, Farida Cheriet
Abstract: The automated identification of retinal disorders is one of the most popular real-world computer vision applications in ophthalmology. It offers several advantages and can help ophthalmologists identify diseases more accurately. Technically, it is a retinal data classification problem. With recent advances in Artificial Intelligence (AI), Transformer-based architectures have become powerful models for a wide range of tasks such as image classification. Yet even though Transformers have demonstrated excellent performance compared to existing cutting-edge models, they are data-hungry architectures, and their performance in automated medical diagnosis applications still leaves room for improvement.
In this paper, we propose a deep learning architecture named Dual Transformers-based Generative Adversarial Networks (DTG), designed for Optical Coherence Tomography (OCT) data classification. It adopts the Vision Transformer and the Multiscale Vision Transformer to encode retinal 2D OCT images (i.e., B-scans) and 3D OCT images (i.e., OCT sequences of B-scans). It then employs a proposed Generative Adversarial Network (GAN) architecture to infer high-quality semantic data representations, and enlarges the training data using our patient instance-based data augmentation technique. Finally, a weighted classifier analyzes the data and performs the retinal disease classification task. Extensive experiments are carried out on two real-world OCT datasets. The results show that DTG surpasses several competitors in classification accuracy, precision, recall, F1-score, quadratic weighted kappa, AUC-PR, and AUC-ROC. In particular, it outperforms popular Convolutional Neural Networks and Transformers used for 2D and 3D OCT image classification, and it can improve the performance of several existing approaches to retinal data classification.
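Among the reported metrics, quadratic weighted kappa is the least standard; it rewards agreement on ordinal labels while penalizing disagreements by the square of their grade distance. A minimal pure-Python version (a generic implementation of the standard definition, not the authors' code):

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic weights for ordinal labels 0..n_classes-1."""
    n = len(y_true)
    # observed confusion matrix
    obs = [[0.0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        obs[t][p] += 1
    # marginal histograms for the chance-agreement (expected) matrix
    hist_t = [sum(1 for t in y_true if t == k) for k in range(n_classes)]
    hist_p = [sum(1 for p in y_pred if p == k) for k in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2  # quadratic disagreement weight
            expected = hist_t[i] * hist_p[j] / n
            num += w * obs[i][j]
            den += w * expected
    return 1.0 - num / den
```

Perfect agreement gives 1.0; chance-level agreement gives about 0.0; confusing grade 0 with grade 3 costs nine times more than confusing adjacent grades.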
Medical Image Analysis, vol. 109, Article 103880. Pub Date: 2026-03-01. Epub Date: 2025-11-17. DOI: 10.1016/j.media.2025.103880
A Multi-instance Learning Network with Prototype-instance Adversarial Contrastive for Cervix Pathology Grading
Mingrui Ma, Furong Luo, Binlin Ma, Shuxian Liu, Xiaoyi Lv, Pan Huang
Abstract: The pathological grading of cervical squamous cell carcinoma (CSCC) is a fundamental and important index in tumor diagnosis. Pathologists tend to focus on single differentiation areas during the grading process. Existing multi-instance learning (MIL) methods divide pathology images into regions, generating multiple differentiated instances (MDIs) that often exhibit ambiguous grading patterns. These ambiguities reduce a model's ability to accurately represent CSCC pathological grading patterns. Motivated by these issues, we propose an end-to-end multi-instance learning network with prototype-instance adversarial contrastive learning, termed PacMIL, which incorporates three key ideas. First, we introduce an end-to-end multi-instance nonequilibrium learning algorithm that addresses the mismatch between MIL feature representations and CSCC pathological grading and enables nonequilibrium representation. Second, we design a prototype-instance adversarial contrastive (PAC) approach that integrates a priori prototype instances with a probability-distribution attention mechanism, enhancing the model's ability to learn representations of single differentiated instances (SDIs). Third, we incorporate an adversarial contrastive learning strategy into the PAC method to overcome the limitation that fixed metrics rarely capture the variability of MDIs and SDIs. In addition, we embed the correct metric distances of the MDIs and SDIs into the optimization objective to further guide representation learning. Extensive experiments demonstrate that PacMIL achieves 93.09% mAcc and 0.9802 AUC, outperforming other SOTA models, and that its representation ability is superior to that of existing SOTA approaches. Overall, our model offers enhanced practicality in CSCC pathological grading. Our code and dataset will be made publicly available at https://github.com/Baron-Huang/PacMIL.
Medical Image Analysis, vol. 109, Article 103890. Pub Date: 2026-03-01. Epub Date: 2025-11-29. DOI: 10.1016/j.media.2025.103890
PISCO: Self-supervised k-space regularization for improved neural implicit k-space representations of dynamic MRI
Veronika Spieker, Hannah Eichhorn, Wenqi Huang, Jonathan K. Stelter, Tabita Catalan, Rickmer F. Braren, Daniel Rueckert, Francisco Sahli Costabal, Kerstin Hammernik, Dimitrios C. Karampinos, Claudia Prieto, Julia A. Schnabel
Abstract: Neural implicit k-space representations (NIK) have shown promising results for dynamic magnetic resonance imaging (MRI) at high temporal resolutions. Yet reducing acquisition time, and thereby the available training data, results in severe performance drops due to overfitting. To address this, we introduce a novel self-supervised k-space loss function, L_PISCO, applicable for regularizing NIK-based reconstructions. The proposed loss is based on the concept of parallel imaging-inspired self-consistency (PISCO), enforcing a consistent global k-space neighborhood relationship without requiring additional data. Quantitative and qualitative evaluations on static and dynamic MR reconstructions show that integrating PISCO significantly improves NIK representations, making it a competitive dynamic reconstruction method without constraining the temporal resolution. Particularly at high acceleration factors (R ≥ 50), NIK with PISCO avoids the temporal oversmoothing of state-of-the-art methods and achieves superior spatio-temporal reconstruction quality. Furthermore, an extensive analysis of the loss assumptions and stability shows PISCO's potential as a versatile self-supervised k-space loss function for further applications and architectures. Code is available at: https://github.com/compai-lab/2025-pisco-spieker
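PISCO's parallel imaging-inspired self-consistency enforces a single global k-space neighborhood relationship. A toy 1D real-valued analogue (a drastic simplification of the actual complex, multi-coil formulation): fit one global linear rule predicting each sample from its neighbours, then penalize the residual; a signal obeying one shared neighborhood relation yields near-zero loss:

```python
def pisco_like_loss(sig):
    """Toy PISCO-style self-consistency loss on a 1D signal (illustrative only).

    Least-squares fit of a single global rule center ~= w1*left + w2*right
    over every interior neighborhood, returning the mean squared residual.
    """
    rows = [((sig[i - 1], sig[i + 1]), sig[i]) for i in range(1, len(sig) - 1)]
    # normal equations for the 2-unknown least-squares problem
    a11 = sum(l * l for (l, r), _ in rows)
    a12 = sum(l * r for (l, r), _ in rows)
    a22 = sum(r * r for (l, r), _ in rows)
    b1 = sum(l * c for (l, r), c in rows)
    b2 = sum(r * c for (l, r), c in rows)
    det = a11 * a22 - a12 * a12
    if abs(det) < 1e-12:        # degenerate neighborhoods: no unique rule
        return 0.0
    w1 = (b1 * a22 - b2 * a12) / det
    w2 = (a11 * b2 - a12 * b1) / det
    res = [(c - w1 * l - w2 * r) ** 2 for (l, r), c in rows]
    return sum(res) / len(res)
```

The key property mirrored here is self-supervision: the weights are estimated from the signal itself, so no extra calibration data is needed.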
Medical Image Analysis, vol. 109, Article 103954. Pub Date: 2026-03-01. Epub Date: 2026-01-20. DOI: 10.1016/j.media.2026.103954
UNISELF: A unified network with instance normalization and self-ensembled lesion fusion for multiple sclerosis lesion segmentation
Jinwei Zhang, Lianrui Zuo, Blake E. Dewey, Samuel W. Remedios, Yihao Liu, Savannah P. Hays, Dzung L. Pham, Ellen M. Mowry, Scott D. Newsome, Peter A. Calabresi, Shiv Saidha, Aaron Carass, Jerry L. Prince
Abstract: Automated segmentation of multiple sclerosis (MS) lesions using multicontrast magnetic resonance (MR) images improves efficiency and reproducibility compared to manual delineation, with deep learning (DL) methods achieving state-of-the-art performance. However, when trained on a single source with limited data, existing DL-based methods have yet to simultaneously optimize in-domain accuracy and out-of-domain generalization. To fill this gap, we propose UNISELF, a method that achieves high accuracy within a single training domain while generalizing strongly across multiple out-of-domain test datasets. UNISELF employs a novel test-time self-ensembled lesion fusion to improve segmentation accuracy, and leverages test-time instance normalization (TTIN) of latent features to address domain shifts and missing input contrasts. Trained on the ISBI 2015 longitudinal MS segmentation challenge training dataset, UNISELF ranks among the best-performing methods on the challenge test dataset. In addition, UNISELF outperforms all benchmark methods trained on the same ISBI training data on diverse out-of-domain test datasets, including the public MICCAI 2016 and UMCL datasets as well as a private multisite dataset, all of which exhibit domain shifts and/or missing contrasts arising from variations in acquisition protocols, scanner types, and imaging artifacts. Our code is available at https://github.com/Jinwei1209/UNISELF.
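The two test-time ideas can be caricatured at toy scale. Below, TTIN is reduced to normalizing a feature vector using statistics of the test instance itself, and self-ensembled lesion fusion is sketched as a voxel-wise vote over several of the model's own predictions (the vote rule is my assumption; the paper's fusion may differ):

```python
def instance_normalize(feat, eps=1e-5):
    """Normalize one feature channel with the test instance's own statistics,
    the core of the TTIN idea: the normalization adapts to each new scan."""
    mean = sum(feat) / len(feat)
    var = sum((v - mean) ** 2 for v in feat) / len(feat)
    return [(v - mean) / (var + eps) ** 0.5 for v in feat]

def fuse_masks(masks, threshold=0.5):
    """Fuse several binary lesion masks by voxel-wise vote (illustrative fusion)."""
    n = len(masks)
    return [1 if sum(col) / n > threshold else 0 for col in zip(*masks)]
```

Because the normalization statistics come from the test scan rather than the training distribution, a shift in scanner or protocol moves the statistics with it, which is what makes TTIN useful under domain shift.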
Medical Image Analysis, vol. 109, Article 103905. Pub Date: 2026-03-01. Epub Date: 2025-12-07. DOI: 10.1016/j.media.2025.103905
Immunocto: A massive immune cell database auto-generated for histopathology
Mikaël Simard, Zhuoyan Shen, Konstantin Bräutigam, Rasha Abu-Eid, Maria A. Hawkins, Charles-Antoine Collins Fekete
Abstract: With the advent of novel cancer treatment options such as immunotherapy, studying the tumour immune micro-environment (TIME) is crucial for informing prognosis and understanding potential response to therapeutic agents. A key approach to characterising the TIME combines digitised images of haematoxylin and eosin (H&E) stained tissue sections, obtained during routine histopathology examination, with automated immune cell detection and classification methods. In this work, we introduce a workflow to automatically generate robust single-cell contours and labels from tissue sections dually stained with H&E and multiplexed immunofluorescence (IF) markers. The approach harnesses the Segment Anything Model and requires minimal human intervention compared to existing single-cell databases. With this methodology, we create Immunocto, a massive, automatically generated database of 6,848,454 human cells and objects, including 2,282,818 immune cells distributed across 4 subtypes: CD4+ T cell lymphocytes, CD8+ T cell lymphocytes, CD20+ B cell lymphocytes, and CD68+/CD163+ macrophages. For each cell, we provide a 64 × 64 pixel H&E image at 40× magnification, along with a binary mask of the nucleus and a label. The database, which is made publicly available, can be used to train models to study the TIME on routine H&E slides. We show that deep learning models trained on Immunocto achieve state-of-the-art performance for lymphocyte detection. The approach demonstrates the benefits of using matched H&E and IF data to generate robust databases for computational pathology applications.
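A hypothetical record layout for entries like those described, one patch, nucleus mask, and label per cell (field and subtype names are illustrative, not the database's actual schema):

```python
from dataclasses import dataclass

# Illustrative subtype names, mirroring the four immune classes in the abstract
IMMUNE_SUBTYPES = {"CD4+ T cell", "CD8+ T cell", "CD20+ B cell",
                   "CD68+/CD163+ macrophage"}

@dataclass
class CellRecord:
    """One Immunocto-style entry (hypothetical schema)."""
    patch: list         # 64 x 64 H&E image at 40x magnification
    nucleus_mask: list  # 64 x 64 binary mask of the nucleus
    label: str          # cell/object class name

    def is_immune(self):
        return self.label in IMMUNE_SUBTYPES
```

A loader built around such records could filter to the 2,282,818 immune cells with a single `is_immune()` check.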