{"title":"<ArticleTitle xmlns:ns0=\"http://www.w3.org/1998/Math/MathML\">S <ns0:math><ns0:mmultiscripts><ns0:mrow /> <ns0:mrow /> <ns0:mn>3</ns0:mn></ns0:mmultiscripts> </ns0:math> TU-Net: Structured convolution and superpixel transformer for lung nodule segmentation.","authors":"Yuke Wu, Xiang Liu, Yunyu Shi, Xinyi Chen, Zhenglei Wang, YuQing Xu, ShuoHong Wang","doi":"10.1007/s11517-025-03425-8","DOIUrl":"https://doi.org/10.1007/s11517-025-03425-8","url":null,"abstract":"<p><p>Accurate segmentation of lung adenocarcinoma nodules in computed tomography (CT) images is critical for clinical staging and diagnosis. However, irregular nodule shapes and ambiguous boundaries pose significant challenges for existing methods. This study introduces S<sup>3</sup>TU-Net, a hybrid CNN-Transformer architecture designed to enhance feature extraction, fusion, and global context modeling. The model integrates three key innovations: (1) structured convolution blocks (DWF-Conv/D<sup>2</sup>BR-Conv) for multi-scale feature extraction and overfitting mitigation; (2) S<sup>2</sup>-MLP Link, a spatial-shift-enhanced skip-connection module to improve multi-level feature fusion; and 3) residual-based superpixel vision transformer (RM-SViT) to capture long-range dependencies efficiently. Evaluated on the LIDC-IDRI dataset, S<sup>3</sup>TU-Net achieves a Dice score of 89.04%, precision of 90.73%, and IoU of 90.70%, outperforming recent methods by 4.52% in Dice. Validation on the EPDB dataset further confirms its generalizability (Dice, 86.40%). This work contributes to bridging the gap between local feature sensitivity and global context awareness by integrating structured convolutions and superpixel-based transformers, offering a robust tool for clinical decision support.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144976354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hard exudates segmentation for retinal fundus images based on longitudinal multi-scale fusion network.","authors":"Shuang Liu, Xiangyu Jiang, Jie Zhang, Wei Zou","doi":"10.1007/s11517-025-03426-7","DOIUrl":"https://doi.org/10.1007/s11517-025-03426-7","url":null,"abstract":"<p><p>Accurate segmentation of hard exudate in fundus images is crucial for early diagnosis of retinal diseases. However, hard exudate segmentation is still a challenge task for accurately detecting small lesions and precisely locating the boundaries of ambiguous lesions. In this paper, the longitudinal multi-scale fusion network (LMSF-Net) is proposed for accurate hard exudate segmentation in fundus images. In this network, an adjacent complementary correction module (ACCM) is proposed on the encoding path for complementary fusion between adjacent encoding features, and a progressive iterative fusion module (PIFM) is designed on the decoding path for fusion between adjacent decoding features. Furthermore, a spatial awareness fusion module (SAFM) is proposed at the end of the decoding path for calibration and aggregation of the two decoding outputs. The proposed method can improve segmentation results of hard exudates with different scales and shapes. The experimental results confirm the superiority of the proposed method for hard exudate segmentation with AUPR of 0.6954, 0.9017, and 0.6745 on the DDR, IDRID, and E-Ophtha EX datasets, respectively.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144838409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A non-invasive continuous glucose monitoring method based on the Bergman minimal model.","authors":"Ang Li, Long Zhao, Chenyang Wu, Zhanxiao Geng, Lihui Yang, Fei Tang","doi":"10.1007/s11517-025-03422-x","DOIUrl":"https://doi.org/10.1007/s11517-025-03422-x","url":null,"abstract":"<p><p>Currently, non-invasive continuous blood glucose monitoring technology remains insufficient in terms of clinical validation data. Existing approaches predominantly depend on statistical models to predict blood glucose levels, which often suffer from limited data samples. This leads to significant individual differences in non-invasive continuous glucose monitoring, limiting its scope and promotion. We propose a neural network that uses metabolic characteristics as inputs to predict the rate of insulin-facilitated glucose uptake by cells and postprandial glucose gradient changes (glucose gradient: the rate of change of blood glucose concentration within a unit of time (dG/dt), with the unit of mg/(dL × min), reflects the dynamic change trend of blood glucose levels). This neural network utilises non-invasive continuous glucose monitoring method based on the Bergman minimal model (BM-NCGM) while considering the effects of the glucose gradient, insulin action, and the digestion process on glucose changes, achieving non-invasive continuous glucose monitoring. This work involved 161 subjects in a controlled clinical trial, collecting over 15,000 valid data sets. The predictive results of BM-NCGM for glucose showed that the CEG A area accounted for 77.58% and the A + B area for 99.57%. The correlation coefficient (0.85), RMSE (1.48 mmol/L), and MARD (11.51%) showed an improvement of over 32% compared to the non-use of BM-NCGM. The dynamic time warping algorithm was used to calculate the distance between the predicted blood glucose spectrum and the reference blood glucose spectrum, with an average distance of 21.80, demonstrating the excellent blood glucose spectrum tracking ability of BM-NCGM. This study is the first to apply the Bergman minimum model to non-invasive continuous blood glucose monitoring research, supported by a large amount of clinical trial data, bringing non-invasive continuous blood glucose monitoring closer to its true application in daily blood glucose monitoring. CLINICAL TRIAL REGISTRY NUMBER: ChiCTR1900028100.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144785849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ensemble learning-based method for multiple sclerosis screening from retinal OCT images.","authors":"Yaroub Elloumi, Rostom Kachouri","doi":"10.1007/s11517-025-03410-1","DOIUrl":"https://doi.org/10.1007/s11517-025-03410-1","url":null,"abstract":"<p><p>Multiple sclerosis (MS) is a neurodegenerative disease that impacts retinal layer thickness. Thus, several works proposed to diagnose MS from the retinal optical coherence tomography (OCT) images. Recent clinical studies affirmed that thinning occurs on the four top layers, explicitly in the macular region. However, existing MS detection methods have not considered all MS symptoms, which may impact the MS detection performance. In this research, we propose a new automated method to detect MS from the retinal OCT images. The main principle is based on extracting the relevant retinal layers and figuring out the layer thicknesses, which are investigated to deduce the MS disease. The main challenge is to guarantee a higher performance biomarker extraction within an efficient exploration of OCT cuts. Our contribution consists of the following: (1) employing two DL architectures to segment separately sub-images based on their morphology, in order to enhance segmentation quality; (2) extracting thickness features from the four top layers; (3) dedicating a classifier for each OCT cut that is selected based on its position with respect to the macula center; and (4) merging the classifier knowledge through an ensemble learning approach. Our suggested method achieved 97% accuracy, 100% sensitivity, and 94% precision and specificity, which outperforms several state-of-the-art methods.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144769158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimization of CBCT data with image processing methods and production with fused deposition modeling 3D printing.","authors":"Hamdi Sayin, Bekir Aksoy, Koray Özsoy, Derya Yildirim","doi":"10.1007/s11517-023-02889-w","DOIUrl":"10.1007/s11517-023-02889-w","url":null,"abstract":"<p><p>The present study has investigated the effect of the removal of artifacts in cone beam computed tomography (CBCT) images with image processing techniques to dental implant planning. The aim of this study has been to benefit from the novel image processing techniques and additive manufacturing technologies in order to change the existing approach in the usage of the 3D model in the orthogonal surgery, traumatic cases, and tumor operations and to solve the restrictions in surgical operations. In the study, firstly, 3 × 3, 5 × 5, and 7 × 7 kernel values were determined on the CBCT image data of the patient. The determined kernel values were applied on CBCT images by choosing median, median-mean-Gaussian (MMG), and bilateral filters, which are quite successful in removing noise in medical images. A thresholding process to separate teeth and bones from soft tissue regions on CBCT images, histogram normalization for a balanced color distribution, morphology operations to reduce noise areas, and tooth and bone boundaries were determined as closely as possible to patient anatomy. The original image and the images obtained from image enhancement techniques were compared. Results showed that the 3 × 3 median filtering method from three different kernel values out of three different image processing methods used in the study greatly improved the artifacts. It has also been shown that the availability of image processing and additive manufacturing methods on CBCT images has been shown to be a highly important factor before dental surgery planning.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"2235-2246"},"PeriodicalIF":2.6,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9883862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic placement of simulated dental implants within CBCT images in optimum positions: a deep learning model.","authors":"Shahd Alotaibi, Mona Alsomali, Shatha Alghamdi, Sara Alfadda, Isra Alturaiki, Asma'a Al-Ekrish, Najwa Altwaijry","doi":"10.1007/s11517-025-03327-9","DOIUrl":"10.1007/s11517-025-03327-9","url":null,"abstract":"<p><p>Implant dentistry is the standard of care for the replacement of missing teeth. It is a complex process where cone-beam computed tomography (CBCT) images are analyzed by the dentist to determine the implants' length, diameter, and position, and angulation diameter, position, and angulation taking into consideration the prosthodontic treatment plan, bone morphology, and position of adjacent vital anatomical structures. This traditional procedure is time-consuming and relies heavily on the dentist's knowledge and expertise, which makes it subject to human errors. This study presents a two-stage framework for the placement of dental implants. The first stage utilizes YOLOv11 for the detection of fiducial markers and adjacent bone within 2D slices of 3D CBCT images. In the second stage, classification and regression are applied to extract the apical and occlusal coordinates of the implants and to predict the implants' intra-osseous length and intra-osseous diameter. YOLOv11 achieved a 59% F-score in the marker detection phase. The mean absolute error for the implant position prediction ranged from 11.931 to 15.954. The classification of the intra-osseous diameter showed 76% accuracy, and the intra-osseous length showed an accuracy of 59%. Our results were reviewed by an expert prosthodontist and deemed promising.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"2325-2339"},"PeriodicalIF":2.6,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143504824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"OCCMNet: Occlusion-Aware Class Characteristic Mining Network for multi-class artifacts detection in endoscopy.","authors":"Chenchu Xu, Yu Chen, Jie Liu, Boyan Wang, Yanping Zhang, Jie Chen, Shu Zhao","doi":"10.1007/s11517-025-03332-y","DOIUrl":"10.1007/s11517-025-03332-y","url":null,"abstract":"<p><p>Multi-class endoscope artifacts detection is crucial for eliminating interference caused by artifacts during clinical examinations and reducing the rate of misdiagnosis and missed diagnoses by physicians. However, this task remains challenging such as data imbalance, similarity, and occlusion among artifacts. To overcome these challenges, we propose an Occlusion-Aware Class Characteristic Mining Network (OCCMNet) to detect eight classes of artifacts in endoscope simultaneously. The OCCMNet comprises the following: (1) A Dual-Branch Class Rebalancing Module (DCRM) rebalances the impact of various classes by fully exploiting the benefits of two complementary data distributions, sampling and detecting from the majority and minority classes respectively. (2) A Class Discrimination Enhancement Module (CDEM) effectively enhances the discrepancy of inter-class by enhance important information and introduce nuance information nonlinearly. (3) A Global Occlusion-Aware Module (GOAM) infers the obscured part of the artifacts by capturing the global information to initially identify the obscured artifacts and combining local details to sense the overall structure of the artifacts. Our OCCMNet has been validated on a public dataset (EndoCV2020). Compared to the latest methods in both medical and computer vision detection, our approach demonstrated 3.5-6.5% improvement in mAP50. The results proved the superiority of our OCCMNet in multi-class endoscopic artifact detection and demonstrated its great potential in reducing clinical interference.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"2407-2422"},"PeriodicalIF":2.6,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143558461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ETDformer: an effective transformer block for segmentation of intracranial hemorrhage.","authors":"Wanyuan Gong, Yanmin Luo, Fuxing Yang, Huabiao Zhou, Zhongwei Lin, Chi Cai, Youcao Lin, Junyan Chen","doi":"10.1007/s11517-025-03333-x","DOIUrl":"10.1007/s11517-025-03333-x","url":null,"abstract":"<p><p>Intracerebral hemorrhage (ICH) medical image segmentation plays a crucial role in clinical diagnostics and treatment planning. The U-Net architecture, known for its encoder-decoder design and skip connections, is widely used but often struggles with accurately delineating complex struct ures like ICH regions. Recently, transformer models have been incorporated into medical image segmentation, improving performance by capturing long-range dependencies. However, existing methods still face challenges in incorrectly segmenting non-target areas and preserving detailed information in the target region. To address these issues, we propose a novel segmentation model that combines U-Net's local feature extraction with the transformer's global perceptiveness. Our method introduces an External Storage Module (ES Module) to capture and store feature similarities between adjacent slices, and a Top-Down Attention (TDAttention) mechanism to focus on relevant lesion regions while enhancing target boundary segmentation. Additionally, we introduce a boundary DoU loss to improve lesion boundary delineation. Evaluations on the intracranial hemorrhage dataset (IHSAH) from the Second Affiliated Hospital of Fujian Medical University, as well as the publicly available Brain Hemorrhage Segmentation Dataset (BHSD), demonstrate that our approach achieves DSC scores of 91.29% and 85.10% on the IHSAH and BHSD datasets, respectively, outperforming the second-best Cascaded MERIT by 2.19% and 2.05%, respectively. Moreover, our method provides enhanced visualization of lesion details, significantly aiding diagnostic accuracy.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"2355-2372"},"PeriodicalIF":2.6,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143525059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TongueTransUNet: toward effective tongue contour segmentation using well-managed dataset.","authors":"Khalid Al-Hammuri, Fayez Gebali, Awos Kanan","doi":"10.1007/s11517-024-03278-7","DOIUrl":"10.1007/s11517-024-03278-7","url":null,"abstract":"<p><p>In modern telehealth and healthcare information systems medical image analysis is essential to understand the context of the images and its complex structure from large, inconsistent-quality, and distributed datasets. Achieving desired results faces a few challenges for deep learning. Examples of these challenges are date size, labeling, balancing, training, and feature extraction. These challenges made the AI model complex and expensive to be built and difficult to understand which made it a black box and produce hysteresis and irrelevant, illegal, and unethical output in some cases. In this article, lingual ultrasound is studied to extract tongue contour to understand language behavior and language signature and utilize it as biofeedback for different applications. This article introduces a design strategy that can work effectively using a well-managed dynamic-size dataset. It includes a hybrid architecture using UNet, Vision Transformer (ViT), and contrastive loss in latent space to build a foundation model cumulatively. The process starts with building a reference representation in the embedding space using human experts to validate any new input for training data. UNet and ViT encoders are used to extract the input feature representations. The contrastive loss was then compared to the new feature embedding with the reference in the embedding space. The UNet-based decoder is used to reconstruct the image to its original size. Before releasing the final results, quality control is used to assess the segmented contour, and if rejected, the algorithm requests an action from a human expert to annotate it manually. The results show an improved accuracy over the traditional techniques as it contains only high quality and relevant features.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"2295-2309"},"PeriodicalIF":2.6,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143442616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimation of pulse wave analysis indices from invasive arterial blood pressure only for a clinical assessment of wave reflection in a 5-day septic animal experiment.","authors":"Diletta Guberti, Manuela Ferrario, Marta Carrara","doi":"10.1007/s11517-025-03328-8","DOIUrl":"10.1007/s11517-025-03328-8","url":null,"abstract":"<p><p>Wave separation analysis (WSA) is the gold standard to analyze the arterial blood pressure (ABP) waveform, decomposing it into a forward and a reflected wave. It requires ABP and arterial blood flow (ABF) measurement, and ABF is often unavailable in clinical settings. Therefore, methods to estimate ABF from ABP have been proposed, but they are not investigated in critical conditions. In this work, an autoregressive with exogenous input model was proposed as an original method to estimate ABF from the measured ABP. Its performance in assessing WSA indices to characterize the arterial tree was evaluated in critical conditions, i.e., during sepsis. The triangular and the personalized flow approximation and the multi-Gaussian ABP decomposition were compared to the proposed model. The results highlighted how the black-box modeling approach is superior to other flow estimation models when computing WSA indices in septic condition. This approach holds promise for overcoming challenges in clinical settings where ABF data are unavailable.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"2341-2353"},"PeriodicalIF":2.6,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12316715/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143517154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}