Deep Learning-Augmented Sleep Spindle Detection for Acute Disorders of Consciousness: Integrating CNN and Decision Tree Validation
Jiahui Pan, Zhenglang Yang, Qingyu Shen, Man Li, Chunhong Jiang, Yi Li, Yuanqing Li
IEEE Transactions on Biomedical Engineering, published 2025-04-25. DOI: 10.1109/TBME.2025.3562067

Abstract: Sleep spindles, which are key biomarkers of non-rapid eye movement stage 2 sleep, play a crucial role in predicting outcomes for patients with acute disorders of consciousness (ADOC). However, several critical challenges remain in spindle detection: 1) the limited use of automated spindle detection in ADOC; 2) the difficulty in identifying low-frequency spindles in patient populations; and 3) the lack of effective tools for quantitatively analyzing the relationship between spindle density and patient outcomes. To address these challenges, we propose a novel deep learning-augmented algorithm for automated sleep spindle detection in ADOC patients. This method combines convolutional neural networks with decision tree-assisted validation, using wavelet transform principles to enhance detection accuracy and sensitivity, especially for the slow spindles commonly found in ADOC patients. Our approach not only demonstrates superior performance and reliability but also has the potential to significantly improve diagnostic precision and guide treatment strategies when integrated into clinical practice. The algorithm was evaluated on the Montreal Archive of Sleep Studies - Session 2 (MASS SS2, n = 19), achieving average F1 scores of 0.798 and 0.841 compared to annotations from two experts. On a self-recorded dataset from ADOC patients (n = 24), it achieved an F1 score of 0.745 compared to expert annotations. Additionally, our analysis using the Spearman correlation coefficient revealed a moderate positive correlation between sleep spindle density and 28-day Glasgow Outcome Scale scores in ADOC patients. This suggests that spindle density could serve as a prognostic marker for predicting clinical outcomes and guiding personalized patient care.
Migration of Deep Learning Models Across Ultrasound Scanners
Ufuk Soylu, Varun Chandrasekeran, Gregory J Czarnota, Michael L Oelze
IEEE Transactions on Biomedical Engineering, published 2025-04-25. DOI: 10.1109/TBME.2025.3564567

Abstract: A transfer function approach has recently proven effective for calibrating deep learning (DL) algorithms in quantitative ultrasound (QUS), addressing data shifts at both the acquisition and machine levels. Expanding on this approach, we develop a strategy to acquire the functionality of a DL model from one ultrasound machine and implement it on another in a black-box setting, in the context of QUS. This demonstrates the ease with which the functionality of a DL model can be transferred between machines. While the proposed approach can assist regulatory bodies in comparing and approving DL models, it also highlights the security risks of deploying such models in a commercial scanner for clinical use. The method is a black-box unsupervised domain adaptation technique that integrates the transfer function approach with an iterative scheme. It uses no information about the model's internals and relies solely on the availability of an input-output interface, together with unlabeled data from the testing machine. This scenario could become relevant as companies begin deploying their DL functionalities for clinical use. In the experiments, a model was trained on data from a SonixOne machine and its functionality was then transferred to a Verasonics machine, achieving 98% classification accuracy in a binary decision task. This study underscores the need to establish security measures before deploying DL models in clinical settings.
Inverse Problem Approach to Aberration Correction for in vivo Transcranial Imaging Based on a Sparse Representation of Contrast-enhanced Ultrasound Data
Paul Xing, Antoine Malescot, Eric Martineau, Ravi L Rungta, Jean Provost
IEEE Transactions on Biomedical Engineering, published 2025-04-25. DOI: 10.1109/TBME.2025.3564473

Abstract:
Objective: Transcranial ultrasound imaging is currently limited by attenuation and aberration induced by the skull. First used in contrast-enhanced ultrasound (CEUS), highly echoic microbubbles have enabled the development of novel imaging modalities such as ultrasound localization microscopy (ULM). Herein, we develop an inverse problem approach to aberration correction (IPAC) that leverages the sparsity of microbubble signals.
Methods: We use a priori knowledge of the medium, based on microbubble localization and wave propagation, to build a forward model that links the measured signals directly to the aberration function. A standard least-squares inversion is then used to retrieve the aberration function. We first validated IPAC on simulated data of a vascular network using plane-wave and divergent-wave emissions, and then evaluated its reproducibility in vivo in 5 mouse brains.
Results: Aberration correction improved the contrast of CEUS images by 4.6 dB. For ULM images, IPAC yielded sharper vessels, reduced vessel duplications, and improved the resolution from 21.1 µm to 18.3 µm. Aberration correction also improved hemodynamic quantification of velocity magnitude and flow direction.
Conclusion: IPAC can correct skull-induced aberration and improves power Doppler as well as ULM images acquired on the mouse brain.
Significance: This technique is promising for more reliable transcranial imaging of the brain vasculature, with potential non-invasive clinical applications.
CLaI: Collaborative Learning and Inference for Low-Resolution Physiological Signals - Validation in Clinical Event Detection and Prediction
Hollan Haule, Ian Piper, Patricia Jones, Tsz-Yan Milly Lo, Javier Escudero
IEEE Transactions on Biomedical Engineering, published 2025-04-23. DOI: 10.1109/TBME.2025.3563732

Abstract: While machine learning (ML) techniques have been applied to detection and prediction tasks in clinical data, most methods rely on high-resolution data, which is not routinely available in most Intensive Care Units (ICUs), and perform poorly when faced with class imbalance. Here, we introduce and validate Collaborative Learning and Inference (CLaI), which detects and predicts events from learned latent representations of multivariate physiological time series by leveraging similarities across patients. Our method offers a new way to detect and predict events using low-resolution physiological time series. We evaluate its performance on predicting intracranial hypertension and sepsis using the KidsBrainIT (minute-by-minute resolution) and MIMIC-IV (hourly resolution) datasets, respectively, comparing our approach with classification-based and sequence-to-sequence benchmarks from existing studies. Additional experiments on sepsis detection, robustness to class imbalance, and generalizability (demonstrated via seizure detection on the CHB-MIT scalp electroencephalogram dataset) confirm that CLaI handles class imbalance effectively, consistently achieving competitive performance and the highest F1 score. Overall, our approach provides a novel way to analyze routinely collected ICU physiological time series by leveraging patient similarity, enabling ML interpretability through case-based reasoning.
{"title":"IEEE Transactions on Biomedical Engineering Information for Authors","authors":"","doi":"10.1109/TBME.2025.3556622","DOIUrl":"https://doi.org/10.1109/TBME.2025.3556622","url":null,"abstract":"","PeriodicalId":13245,"journal":{"name":"IEEE Transactions on Biomedical Engineering","volume":"72 5","pages":"C3-C3"},"PeriodicalIF":4.4,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10974417","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143860912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Engineering in Medicine and Biology Society Publication Information","authors":"","doi":"10.1109/TBME.2025.3556620","DOIUrl":"https://doi.org/10.1109/TBME.2025.3556620","url":null,"abstract":"","PeriodicalId":13245,"journal":{"name":"IEEE Transactions on Biomedical Engineering","volume":"72 5","pages":"C2-C2"},"PeriodicalIF":4.4,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10974415","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143860907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Pilot Study on Fabric-based Pneumatic Soft Gloves for Assisting Patients with Severe Brachial Plexus Injury
Miao Feng, Dezhi Yang, Jianing Sun, Jie Lao, Jing Rui, Guoying Gu
IEEE Transactions on Biomedical Engineering, published 2025-04-22. DOI: 10.1109/TBME.2025.3563348

Abstract:
Objective: Robotic gloves show promise for hand assistance due to their wearability and potential for home use, yet empirical research remains limited. This pilot study presents a fabric-based pneumatic soft glove and identifies its potential and challenges in clinical practice by evaluating its effectiveness in assisting patients with severe brachial plexus injury (BPI).
Methods: The glove integrates a thumb abduction actuator and four bidirectional fabric-based pneumatic actuators (FPAs) with asymmetric chambers for high output force. Sixteen healthy volunteers and five individuals with BPI, all of whom lacked active hand and wrist movements, were recruited. Participants performed object grasping across 25 cm. The healthy group performed seven tasks using objects weighing up to 2 kg, with muscle activity recorded for analysis. The BPI group further performed tasks with eight objects from the action research arm test (ARAT) and twelve objects for activities of daily living (ADLs), encompassing various sizes, weights, and geometries.
Results: In the healthy group, sEMG amplitude decreased in 89.3% of trials, with 56.0% of these decreases being significant (p < 0.01). For the BPI group, the range of motion (ROM) improved from 28.5 ± 7.9° to 63.1 ± 5.1° for the thumb and from 10.3 ± 17.5° to 122.5 ± 19.0° for the index finger. With a zero baseline for all tasks, completion rates were 6.8 ± 0.8 out of 8 for ARAT tasks and 10.0 ± 1.7 out of 12 for ADLs.
Conclusion: The fabric-based pneumatic soft glove significantly enhanced the hand function of patients with severe BPI, demonstrating its potential for hand assistance.
{"title":"A Neighbor-Sensitive Multi-Modal Flexible Learning Framework for Improved Prostate Tumor Segmentation in Anisotropic MR Images.","authors":"Runqi Meng, Jingli Chen, Kaicong Sun, Qianqian Chen, Xiao Zhang, Ling Dai, Yuning Gu, Guangyu Wu, Dinggang Shen","doi":"10.1109/TBME.2025.3562766","DOIUrl":"https://doi.org/10.1109/TBME.2025.3562766","url":null,"abstract":"<p><p>Accurate segmentation of prostate tumors from multi-modal magnetic resonance (MR) images is crucial for the diagnosis and treatment of prostate cancer. However, the robustness of existing segmentation methods is limited, mainly because these methods 1) fail to flexibly assess subject-specific information of each MR modality and integrate modality-specific information for accurate tumor delineation, and 2) lack effective utilization of inter-slice information across thick slices in MR images to segment the tumor as a whole 3D volume. In this work, we propose a neighbor-sensitive multi-modal flexible learning network (NesMFle) for accurate prostate tumor segmentation from multi-modal anisotropic MR images. Specifically, we perform multi-modal fusion for each slice by developing a Modality-informativeness Flexible Learning (MFLe) module for selecting and flexibly fusing informative representations of each modality based on inter-modality correlation in a pre-trained manner. After that, we exploit inter-slice feature correlation to derive volumetric tumor segmentation. In particular, we first use a Unet variant equipped with a Sequence Layer, which can coarsely capture slice relationship using 3D convolution and an attention mechanism. Then, we introduce an Activation Mapping Guidance (AMG) module to refine slice-wise representations using information from adjacent slices, ensuring consistent tumor segmentation across neighboring slices based on slice quality assessment on activation maps. Besides, during the network training, we further apply a random mask strategy to each MR modality for improving feature representation efficiency. Experiments on both in-house and public (PICAI) multi-modal prostate tumor datasets demonstrate that our proposed NesMFLe achieves competitive performance compared to state-of-the-art methods.</p>","PeriodicalId":13245,"journal":{"name":"IEEE Transactions on Biomedical Engineering","volume":"PP ","pages":""},"PeriodicalIF":4.4,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144017705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empirical motion-artifact reduction for non-rigid motion in dedicated breast CT
Mikhail Mikerov, Koen Michielsen, Nikita Moriakov, Juan J Pautasso, Sjoerd A M Tunissen, Andrew M Hernandez, John M Boone, Ioannis Sechopoulos
IEEE Transactions on Biomedical Engineering, published 2025-04-21. DOI: 10.1109/TBME.2025.3562610

Abstract:
Objective: To develop a data-driven, empirical motion-artifact reduction algorithm for non-rigid motion in dedicated breast CT.
Methods: Breast CT is a novel imaging modality that offers fully 3D images at good spatial resolution without breast compression or tissue overlap. However, the slow rotation speed of the gantry in such systems increases the likelihood of motion artifacts. Because of breast anatomy, motion-artifact reduction techniques must handle artifacts induced by non-rigid motion, which cannot be modeled explicitly due to variable motion patterns and the breast's inner structure, shape, and size. We present an iterative, data-driven empirical algorithm to reduce motion artifacts in breast CT. The highlight of our method is the ability to perform transformations in the image domain using b-spline fields that are defined for each angle and can be efficiently updated with gradient descent and automatic differentiation.
Results: We tested the method in a simulation study, on physical phantoms, and on clinical cases, and show that it can significantly reduce the appearance of motion artifacts.
Conclusion and Significance: This work introduces a fully data-driven empirical motion-artifact reduction method capable of identifying and minimizing motion artifacts without an underlying motion model.