A multimodal framework for assessing the link between pathomics, transcriptomics, and pancreatic cancer mutations
Francesco Berloco, Gian Maria Zaccaria, Nicola Altini, Simona Colucci, Vitoantonio Bevilacqua
Computerized Medical Imaging and Graphics, Volume 123, Article 102526. Published 2025-03-15. DOI: 10.1016/j.compmedimag.2025.102526

Abstract: In Pancreatic Ductal Adenocarcinoma (PDAC), predicting genetic mutations directly from histopathological images using deep learning can provide valuable insights, and combining several omics can yield further knowledge of the mechanisms underlying tumor biology. This study aimed to develop an explainable multimodal pipeline to predict mutations in the KRAS, TP53, SMAD4, and CDKN2A genes, integrating pathomic features with transcriptomics from two independent datasets: TCGA-PAAD as the training set and CPTAC-PDA as the external validation set. Large and small configurations of CLAM (Clustering-constrained Attention Multiple Instance Learning) models were evaluated with three different feature extractors (ResNet50, UNI, and CONCH). RNA-seq data were pre-processed both conventionally and with three autoencoder architectures, and the processed transcript panels were input into machine learning (ML) models for mutation classification. Attention maps and SHAP were employed to highlight significant features from both data modalities. A fusion layer or a voting mechanism combined the outputs of the pathomic and transcriptomic models into a multimodal prediction. Performance was assessed by the Area Under the Receiver Operating Characteristic (AUROC) and Precision-Recall (AUPRC) curves. On the validation set, multimodal ML achieved an AUROC of 0.92 and an AUPRC of 0.98 for KRAS, and the multimodal voting model achieved an AUROC of 0.75 and an AUPRC of 0.85 for TP53. For SMAD4 and CDKN2A, transcriptomic ML models achieved AUROCs of 0.71 and 0.65, while multimodal ML showed AUPRCs of 0.39 and 0.37, respectively. This approach demonstrates the potential of combining pathomics with transcriptomics, offering an interpretable framework for predicting key genetic mutations in PDAC.
Liver lesion segmentation in ultrasound: A benchmark and a baseline network
Jialu Li, Lei Zhu, Guibao Shen, Baoliang Zhao, Ying Hu, Hai Zhang, Weiming Wang, Qiong Wang
Computerized Medical Imaging and Graphics, Volume 123, Article 102523. Published 2025-03-14. DOI: 10.1016/j.compmedimag.2025.102523

Abstract: Accurate liver lesion segmentation in ultrasound is challenging due to high speckle noise, ambiguous lesion boundaries, and inhomogeneous intensity distribution inside lesion regions. We first collected and annotated a dataset for liver lesion segmentation in ultrasound. We then propose a novel convolutional neural network that learns dual self-attentive transformer features to boost liver lesion segmentation by leveraging the complementary information among non-local features encoded at different layers of the transformer architecture. To this end, we devise a dual self-attention refinement (DSR) module that synergistically applies self-attention and reverse self-attention to extract complementary lesion characteristics from cascaded multi-layer feature maps, helping the model produce more accurate segmentation results. Moreover, we propose a False-Positive–Negative loss that enables the network to suppress non-liver-lesion noise at shallow transformer layers and enhance target liver lesion details in CNN features at deep transformer layers. Experimental results show that our network outperforms state-of-the-art methods quantitatively and qualitatively.
CQENet: A segmentation model for nasopharyngeal carcinoma based on confidence quantitative evaluation
Yiqiu Qi, Lijun Wei, Jinzhu Yang, Jiachen Xu, Hongfei Wang, Qi Yu, Guoguang Shen, Yubo Cao
Computerized Medical Imaging and Graphics, Volume 123, Article 102525. Published 2025-03-13. DOI: 10.1016/j.compmedimag.2025.102525

Abstract: Accurate segmentation of the tumor regions of nasopharyngeal carcinoma (NPC) is of significant importance for radiotherapy of NPC. However, the precision of existing automatic segmentation methods remains inadequate, mainly because tumors are difficult to localize and their blurred boundaries are hard to delineate. In addition, the black-box nature of deep learning models leaves the confidence of the results insufficiently quantified, preventing users from directly understanding how much the model trusts its own predictions, which severely limits clinical application. This paper proposes an automatic segmentation model for NPC based on confidence quantitative evaluation (CQENet). To address the insufficient quantification of confidence, we introduce a confidence assessment module (CAM) that lets the model output not only the segmentation results but also its confidence in them, helping users understand the uncertainty risks associated with model outputs. To address the difficulty of localizing the position and extent of tumors, we propose a tumor feature adjustment module (FAM) for precise tumor localization and extent determination. To address the challenge of delineating blurred tumor boundaries, we introduce a variance attention mechanism (VAM) to assist edge delineation during fine segmentation. Experiments on a multicenter NPC dataset validate that the proposed method is effective, outperforms existing state-of-the-art models, and has considerable clinical application value.
NaMA-Mamba: Foundation model for generalizable nasal disease detection using masked autoencoder with Mamba on endoscopic images
Wensheng Wang, Zewen Jin, Xueli Liu, Xinrong Chen
Computerized Medical Imaging and Graphics, Volume 122, Article 102524. Published 2025-03-12. DOI: 10.1016/j.compmedimag.2025.102524

Abstract: Artificial intelligence (AI) has shown great promise in analyzing nasal endoscopic images for disease detection. However, current AI systems require extensive expert-labeled data for each specific medical condition, limiting their applications. This work addresses the challenge through two key innovations: the creation of the first large-scale pre-training dataset of nasal endoscopic images, and the development of a novel self-learning AI system designed specifically for nasal endoscopy, named NaMA-Mamba. The proposed NaMA-Mamba model relies on two key technologies: a nasal endoscopic state space model (NE-SSM) for analyzing sequences of images and an enhanced learning mechanism (CoMAE) for capturing fine details in nasal tissues. These innovations enable the system to learn effectively from unlabeled images while maintaining high accuracy across different diagnostic tasks. In extensive testing, NaMA-Mamba achieved remarkable results with minimal labeled data, matching the performance of traditional fully supervised systems while needing only 1% of the labeled data for tasks such as detecting nasal polyps and identifying nasopharyngeal cancer. These results demonstrate the potential of NaMA-Mamba to significantly improve the efficiency and accessibility of AI-assisted nasal disease diagnosis in clinical practice.
Multi-modal pre-post treatment consistency learning for automatic segmentation and evaluation of the Circle of Willis
Zehang Lin, Yusheng Liu, Jiahua Wu, Da-Han Wang, Xu-Yao Zhang, Shunzhi Zhu
Computerized Medical Imaging and Graphics, Volume 122, Article 102521. Published 2025-03-08. DOI: 10.1016/j.compmedimag.2025.102521

Abstract: The Circle of Willis (CoW) is a crucial vascular structure in the brain, vital for diagnosing vascular diseases. During the acute phase of disease, CT angiography (CTA) is commonly used to quickly locate occlusions within the CoW. After treatment, MR angiography (MRA) is preferred for visualizing postoperative vascular structures, reducing radiation exposure. Clinically, the pre- and post-treatment (P&P-T) changes in the CoW are critical for assessing treatment efficacy. However, previous studies focused on single-modality segmentation, leading to cumulative errors when the CoW is segmented separately in CTA and MRA; it then becomes difficult to tell whether observed changes in the CoW stem from segmentation errors or from actual therapeutic effects. To address these challenges, we propose a comprehensive framework integrating the Cross-Modal Semantic Consistency Network (CMSC-Net) for segmentation and the Semantic Consistency Evaluation Network (SC-ENet) for treatment evaluation. CMSC-Net includes two key components: a Modality Pair Alignment Module (MPAM), which generates spatially aligned modality pairs (CTA-MRA, MRA-CTA) to mitigate imaging discrepancies, and a Cross-Modal Attention Module (CMAM), which enhances CTA segmentation by leveraging high-resolution MRA features. Additionally, a novel loss function ensures semantic consistency across modalities, supporting stable network convergence. Meanwhile, SC-ENet automates treatment-efficacy evaluation by extracting static vascular features and dynamically tracking morphological changes over time. Experimental results show that CMSC-Net achieves consistent CoW segmentation across modalities, with SC-ENet delivering high-precision treatment evaluation.
UniBrain: Universal Brain MRI diagnosis with hierarchical knowledge-enhanced pre-training
Jiayu Lei, Lisong Dai, Haoyun Jiang, Chaoyi Wu, Xiaoman Zhang, Yao Zhang, Jiangchao Yao, Weidi Xie, Yanyong Zhang, Yuehua Li, Ya Zhang, Yanfeng Wang
Computerized Medical Imaging and Graphics, Volume 122, Article 102516. Published 2025-03-07. DOI: 10.1016/j.compmedimag.2025.102516

Abstract: Magnetic Resonance Imaging (MRI) has become a pivotal tool in diagnosing brain diseases, with a wide array of computer-aided artificial intelligence methods being proposed to enhance diagnostic accuracy. However, early studies were often limited by small-scale datasets and a narrow range of disease types, which posed challenges in model generalization. This study presents UniBrain, a hierarchical knowledge-enhanced pre-training framework designed for universal brain MRI diagnosis. UniBrain leverages a large-scale dataset comprising 24,770 imaging-report pairs from routine diagnostics for pre-training. Unlike previous approaches that either focused solely on visual representation learning or used brute-force alignment between vision and language, the framework introduces a hierarchical alignment mechanism. This mechanism extracts structured knowledge from free-text clinical reports at multiple granularities, enabling vision-language alignment at both the sequence and case levels, thereby significantly improving feature learning efficiency. A coupled vision-language perception module is further employed for text-guided multi-label classification, which facilitates zero-shot evaluation and fine-tuning of downstream tasks without modifying the model architecture. UniBrain is validated on both in-domain and out-of-domain datasets, consistently surpassing existing state-of-the-art diagnostic models and demonstrating performance on par with radiologists in specific disease categories. It shows strong generalization capabilities across diverse tasks, highlighting its potential for broad clinical application. The code is available at https://github.com/ljy19970415/UniBrain.
Domain adaptive detection framework for multi-center bone tumor detection on radiographs
Bing Li, Danyang Xu, Hongxin Lin, Ruodai Wu, Songxiong Wu, Jingjing Shao, Jinxiang Zhang, Haiyang Dai, Dan Wei, Bingsheng Huang, Zhenhua Gao, Xianfen Diao
Computerized Medical Imaging and Graphics, Volume 123, Article 102522. Published 2025-03-07. DOI: 10.1016/j.compmedimag.2025.102522

Abstract: Automatic bone tumor detection on radiographs is crucial for reducing mortality from bone cancer. However, detection performance may degrade considerably when a method is deployed on bone tumor data from a distinct domain, owing to differences in the imaging process; this can be remedied by training with a large amount of annotated data, which is difficult to obtain in clinical practice. To address this challenge, we propose a domain-adaptive (DA) detection framework that effectively bridges the domain gap between bone tumor radiographs across centers. It consists of four parts: a multilevel feature alignment module (MFAM) for image-level alignment, a Wasserstein distance critic (WDC) to quantify feature distance, an instance feature alignment module (IFAM) for instance-level alignment, and a consistency regularization module (CRM) that maintains consistency between the domain predictions of MFAM and IFAM. Experimental results indicate that our framework improves average precision at an intersection-over-union threshold of 0.2 (AP@20) on the source- and target-domain test sets by 1% and 8.9%, respectively. Moreover, we designed a domain discriminator with an attention mechanism to improve the efficiency and performance of the domain-adaptive bone tumor detection model, which further improved AP@20 on the source- and target-domain test sets by 2% and 10.7%, respectively. The proposed DA model is expected to bridge the domain gap and address the generalization problem across multiple centers.
Interstitial-guided automatic clinical tumor volume segmentation network for cervical cancer brachytherapy
Shudong Tan, Jiahui He, Ming Cui, Yuhua Gao, Deyu Sun, Yaoqin Xie, Jing Cai, Nazar Zaki, Wenjian Qin
Computerized Medical Imaging and Graphics, Volume 123, Article 102520. Published 2025-03-06. DOI: 10.1016/j.compmedimag.2025.102520

Abstract: Automatic clinical tumor volume (CTV) delineation is pivotal to improving outcomes of interstitial brachytherapy for cervical cancer. However, the prominent gray-value differences introduced by the interstitial needles pose great challenges for deep learning-based segmentation models. In this study, we propose a novel interstitial-guided segmentation network, termed the advance reverse guided network (ARGNet), for cervical tumor segmentation under interstitial brachytherapy. First, the location information of the interstitial needles is integrated into the deep learning framework through multi-task learning, using cross-stitch units to share encoder feature learning. Second, a spatial reverse attention mechanism is introduced to mitigate the distracting effect of the needles on tumor segmentation. Furthermore, an uncertainty area module is embedded between the skip connections and the encoder of the tumor segmentation task to enhance the model's ability to discern ambiguous boundaries between the tumor and the surrounding tissue. Comprehensive retrospective experiments were conducted on 191 CT scans acquired during multi-course interstitial brachytherapy. The results demonstrate that the characteristics of the interstitial needles help enhance segmentation, achieving state-of-the-art performance, which is anticipated to benefit radiotherapy planning.
A unified approach to medical image segmentation by leveraging mixed supervision and self and transfer learning (MIST)
Jianfei Liu, Sayantan Bhadra, Omid Shafaat, Pritam Mukherjee, Christopher Parnell, Ronald M. Summers
Computerized Medical Imaging and Graphics, Volume 122, Article 102517. Published 2025-03-05. DOI: 10.1016/j.compmedimag.2025.102517

Abstract: Medical image segmentation is important for quantitative disease diagnosis and treatment but relies on accurate pixel-wise labels, which are costly, time-consuming, and require domain expertise. This work introduces MIST (MIxed supervision, Self, and Transfer learning) to reduce manual labeling in medical image segmentation. A small set of cases was manually annotated ("strong labels"), while the rest used automated, less accurate labels ("weak labels"). Both label types trained a dual-branch network with a shared encoder and two decoders. Self-training iteratively refined weak labels, and transfer learning reduced computational costs by freezing the encoder and fine-tuning the decoders. Applied to segmenting muscle, subcutaneous, and visceral adipose tissue, MIST used only 100 manually labeled slices from 20 CT scans to generate accurate labels for all slices of 102 internal scans, which were then used to train a 3D nnU-Net model. Using MIST to update weak labels significantly improved nnU-Net segmentation accuracy compared to training directly on strong and weak labels. The Dice similarity coefficient (DSC) increased for muscle (89.2 ± 4.3% to 93.2 ± 2.1%), subcutaneous (75.1 ± 14.4% to 94.2 ± 2.8%), and visceral adipose tissue (66.6 ± 16.4% to 77.1 ± 19.0%) on an internal dataset (p < .05). DSC improved for muscle (80.5 ± 6.9% to 86.6 ± 3.9%) and subcutaneous adipose tissue (61.8 ± 12.5% to 82.7 ± 11.1%) on an external dataset (p < .05). MIST reduced the annotation burden by 99%, enabling efficient, accurate pixel-wise labeling for medical image segmentation. Code is available at https://github.com/rsummers11/NIH_CADLab_Body_Composition.
Encoding 3D information in 2D feature maps for brain CT-Angiography
Uma M. Lal-Trehan Estrada, Sunil Sheth, Arnau Oliver, Xavier Lladó, Luca Giancardo
Computerized Medical Imaging and Graphics, Volume 122, Article 102518. Published 2025-03-05. DOI: 10.1016/j.compmedimag.2025.102518

Abstract: We propose learnable 3D pooling (L3P), a CNN module designed to compress 3D information into 2D feature maps using anisotropic convolutions and unidirectional max pooling. Specifically, we used L3P followed by a 2D network to generate predictions from 3D brain CT angiography (CTA) in the context of large vessel occlusion (LVO); to further demonstrate its versatility, we extended its application to 3D brain MRI analysis for brain age prediction. First, we designed an experiment to classify the LVO-affected hemisphere (left or right), projecting the input CTA onto the sagittal plane so that the discriminative location information lay along the 3D-to-2D compression axis, which allowed us to assess the ability of L3P to encode 3D location. Second, we evaluated L3P on LVO detection as a binary classification task (presence or absence). We compared the performance of L3P models to that of 2D and stroke-specific 3D models. L3P models achieved results equivalent to stroke-specific 3D models while requiring fewer parameters and resources, and provided better results than 2D models that used maximum intensity projection images as input. The generalizability of the L3P approach was evaluated on LVO-affected hemisphere detection using data from a single site for training/validation and data from 36 other sites for testing, achieving an AUC of 0.83 on the test set. L3P also performed comparably to or better than a fully 3D network on a brain age prediction task with a separate T1 MRI dataset, demonstrating its versatility across different tasks and imaging modalities. Additionally, L3P models generated more interpretable feature maps.