{"title":"An analytics-driven review of U-Net for medical image segmentation","authors":"Fnu Neha, Deepshikha Bhati, Deepak Kumar Shukla, Sonavi Makarand Dalvi, Nikolaos Mantzou, Safa Shubbar","doi":"10.1016/j.health.2025.100416","DOIUrl":null,"url":null,"abstract":"<div><div>Medical imaging (MI) plays a vital role in healthcare by providing detailed insights into anatomical structures and pathological conditions, supporting accurate diagnosis and treatment planning. Noninvasive modalities, such as X-ray, magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US), produce high-resolution images of internal organs and tissues. The effective interpretation of these images relies on the precise segmentation of regions of interest (ROI), including organs and lesions. Traditional methods based on manual feature extraction are time-consuming, inconsistent, and not scalable. This review explores recent advances in artificial intelligence (AI)-driven segmentation, focusing on Convolutional Neural Network (CNN) architectures, particularly the U-Net family and its variants, U-Net++ and U-Net 3+. These models enable automated, pixel-wise classification across modalities and have improved segmentation accuracy and efficiency. The review outlines the evolution of U-Net architectures and their clinical integration, and offers a modality-wise comparison. It also addresses challenges such as data heterogeneity, limited generalizability, and model interpretability, proposing solutions including attention mechanisms and Transformer-based designs. Emphasizing clinical applicability, this work bridges the gap between algorithmic development and real-world implementation.</div></div>","PeriodicalId":73222,"journal":{"name":"Healthcare analytics (New York, N.Y.)","volume":"8 ","pages":"Article 100416"},"PeriodicalIF":0.0000,"publicationDate":"2025-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Healthcare analytics (New York, N.Y.)","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772442525000358","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
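The abstract's reference to the U-Net family's architecture can be illustrated with its defining structure: a contracting (encoder) path that halves spatial resolution at each level and an expanding (decoder) path that doubles it back, concatenating the matching encoder feature map at each level via a skip connection. A minimal shape-tracing sketch (not from the paper; input size and depth are illustrative assumptions):

```python
def unet_shapes(size, depth=4):
    """Trace feature-map spatial sizes through a U-Net-style encoder/decoder.

    Each encoder level halves the size (2x2 max-pool); each decoder level
    doubles it (up-convolution) and concatenates the encoder map of the
    same size (the skip connection), which requires the sizes to match.
    """
    encoder = [size]
    for _ in range(depth):
        size //= 2
        encoder.append(size)
    decoder = []
    for skip in reversed(encoder[:-1]):
        size *= 2
        assert size == skip  # skip-connection shapes must align for concat
        decoder.append(size)
    return encoder, decoder

enc, dec = unet_shapes(256)
# enc -> [256, 128, 64, 32, 16]; dec -> [32, 64, 128, 256]
```

The shape symmetry is why U-Net inputs are typically sized as multiples of 2^depth; otherwise the decoder's upsampled maps would not align with their encoder counterparts.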
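The "segmentation accuracy" the review discusses is commonly quantified with the Dice coefficient, a pixel-overlap metric standard in medical image segmentation. A minimal sketch (not from the paper; the 1-D masks stand in for flattened pixel-wise predictions):

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks given as flat pixel lists.

    Dice = 2 * |P intersect T| / (|P| + |T|); 1.0 means a perfect match.
    The small eps keeps the ratio defined when both masks are empty.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * intersection + eps) / (total + eps)

# Toy 1-D "masks" standing in for flattened segmentation maps.
pred = [1, 1, 0, 0, 1, 0]     # model's pixel-wise prediction
target = [1, 0, 0, 0, 1, 0]   # ground-truth annotation
score = dice_coefficient(pred, target)  # 2*2 / (3+2) = 0.8
```

In practice the same formula is applied per organ or lesion class over 2-D or 3-D volumes, which is why modality-wise comparisons of U-Net variants typically report mean Dice per structure.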