Medical & Biological Engineering & Computing: Latest Articles

C²MAL: cascaded network-guided class-balanced multi-prototype auxiliary learning for source-free domain adaptive medical image segmentation
IF 2.6 · CAS Tier 4 · Medicine
Medical & Biological Engineering & Computing Pub Date : 2025-05-01 Epub Date: 2025-01-20 DOI: 10.1007/s11517-025-03287-0
Wei Zhou, Xuekun Yang, Jianhang Ji, Yugen Yi
Abstract: Source-free domain adaptation (SFDA) has become crucial in medical image analysis, enabling the adaptation of source models across diverse datasets without labeled target domain images. Self-training, a popular SFDA approach, iteratively refines self-generated pseudo-labels using unlabeled target domain data to adapt a pre-trained model from the source domain. However, it often faces model instability due to incorrect pseudo-label accumulation and foreground-background class imbalance. This paper presents a pioneering SFDA framework, named cascaded network-guided class-balanced multi-prototype auxiliary learning (C²MAL), to enhance model stability. First, we introduce the cascaded translation-segmentation network (CTS-Net), which employs iterative learning between translation and segmentation networks to generate accurate pseudo-labels. CTS-Net uses a translation network to synthesize target-like images from unreliable predictions on the initial target domain images; the synthesized results refine segmentation network training, ensuring semantic alignment and minimizing visual disparities. Subsequently, reliable pseudo-labels guide the class-balanced multi-prototype auxiliary learning network (CMAL-Net) for effective model adaptation. CMAL-Net incorporates a new multi-prototype auxiliary learning strategy with a memory network to complement source domain data. We propose a class-balanced calibration loss and a multi-prototype-guided symmetric cross-entropy loss to tackle the class imbalance issue and enhance model adaptability to the target domain. Extensive experiments on four benchmark fundus image datasets validate the superiority of C²MAL over state-of-the-art methods, especially in scenarios with significant domain shifts. Our code is available at https://github.com/yxk-art/C2MAL .
Pages: 1551-1570.
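The abstract names a multi-prototype-guided symmetric cross-entropy loss but does not spell it out. As a rough, hypothetical sketch of the standard symmetric cross-entropy that such self-training losses typically build on (the function name, the α/β weights, and the log-clipping constant are assumptions, not taken from the paper):

```python
import numpy as np

def symmetric_cross_entropy(p_true, p_pred, alpha=0.1, beta=1.0, eps=1e-7, clip=-4.0):
    """Symmetric CE: alpha * CE(p_true, p_pred) + beta * RCE(p_pred, p_true).

    log(0) on one-hot (pseudo-)labels is replaced by `clip`, a common
    convention that keeps the reverse term finite."""
    p_pred = np.clip(p_pred, eps, 1.0)
    ce = -np.sum(p_true * np.log(p_pred), axis=-1)          # forward CE
    log_true = np.where(p_true > 0, np.log(np.clip(p_true, eps, 1.0)), clip)
    rce = -np.sum(p_pred * log_true, axis=-1)               # reverse CE
    return np.mean(alpha * ce + beta * rce)
```

The reverse term penalizes predictions that drift away from the (possibly noisy) pseudo-labels less harshly than forward CE alone, which is why this family of losses is popular in self-training with imperfect labels.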
Citations: 0

Advancement in medical report generation: current practices, challenges, and future directions
IF 2.6 · CAS Tier 4 · Medicine
Medical & Biological Engineering & Computing Pub Date : 2025-05-01 Epub Date: 2024-12-21 DOI: 10.1007/s11517-024-03265-y
Marwareed Rehman, Imran Shafi, Jamil Ahmad, Carlos Osorio Garcia, Alina Eugenia Pascual Barrera, Imran Ashraf
Abstract: The correct analysis of medical images requires the medical knowledge and expertise of radiologists to understand, clarify, and explain complex patterns and diagnose diseases. After analysis, radiologists write detailed, well-structured reports that contribute to the precise and timely diagnosis of patients. However, manually writing reports is expensive and time-consuming, and it is difficult for radiologists to analyze medical images, particularly those with multiple views and perspectives. Accurately diagnosing diseases is challenging, and many methods, both traditional and deep learning-based, have been proposed to help radiologists. Automatic report generation is widely used to tackle this issue, as it streamlines the process and lessens the burden of manually labeling images. This paper presents a systematic literature review (SLR) focused on analyzing and evaluating existing research on medical report generation, following a proper protocol for planning, reviewing, and reporting the results. The review finds that the most commonly used deep learning models are encoder-decoder frameworks (45 articles), which achieve an accuracy of around 92-95%. Transformer-based models (20 articles) are the second most established approach, achieving around 91% accuracy. The remaining articles cover attention mechanisms (10), RNN-LSTM (10), large language models (10), and graph-based methods (4), with promising results. However, these methods also face limitations such as overfitting, risk of bias, and high data dependency that impact their performance. The review not only highlights the strengths and challenges of these methods but also suggests ways to address them in the future to increase the accuracy and timeliness of medical report generation. The goal of this review is to direct radiologists toward methods that lessen their workload and provide precise medical diagnoses.
Pages: 1249-1270.
Citations: 0

Machine learning models based on FEM simulation of hoop mode vibrations to enable ultrasonic cuffless measurement of blood pressure
IF 2.6 · CAS Tier 4 · Medicine
Medical & Biological Engineering & Computing Pub Date : 2025-05-01 Epub Date: 2025-01-06 DOI: 10.1007/s11517-024-03268-9
Ravinder Kumar, Vishal Kumar, Collin Rich, David Lemmerhirt, Balendra, J Brian Fowlkes, Ashish Kumar Sahani
Abstract: Blood pressure (BP) is one of the vital physiological parameters, and it is measured routinely for almost all patients who visit hospitals. Cuffless BP measurement has been of great research interest over the last few years. In this paper, we aim to establish a method for cuffless measurement of BP using ultrasound. In this method, the arterial wall is pushed with an acoustic radiation force impulse (ARFI). After the ARFI pulse ends, the artery undergoes impulsive unloading, which stimulates a hoop mode vibration. We designed two machine learning (ML) models that estimate the internal pressure of the artery from ultrasonically measurable parameters. To generate the training data, we performed extensive finite element method (FEM) eigenfrequency simulations of tubes under pressure, sweeping through ranges of inner lumen diameter (ILD), tube density (TD), elastic modulus, internal pressure (IP), tube length, and Poisson's ratio. Through image processing applied to images of the eigenmodes supported in each simulated case, we identified its hoop mode frequency (HMF). Two ML models were designed from the simulated data: a four-parameter model (FPM) that takes tube thickness (TT), TD, ILD, and HMF as inputs and gives IP as output, and a three-parameter model (TPM) that takes TT, ILD, and HMF as inputs and gives IP as output. The accuracy of these models was assessed using simulated data, and their performance was confirmed through experimental verification on two arterial phantoms across a range of pressure values. The FPM exhibited a mean absolute percentage error (MAPE) of 5.63% for the simulated data and 3.68% for the experimental data; the TPM showed a MAPE of 6.5% for simulated data and 8.73% for experimental data. We were able to create ML models that measure pressure within an elastic tube from ultrasonically measurable parameters and verified their performance to be adequate for BP measurement applications. This work establishes a pathway for cuffless, continuous, real-time, and non-invasive measurement of BP using ultrasound.
Pages: 1413-1426.
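The MAPE figures quoted above follow the usual definition of mean absolute percentage error; a minimal sketch (function name is illustrative):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
```

For example, predictions of 95 and 210 against true pressures of 100 and 200 give a MAPE of 5%.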
Citations: 0

Microscopic augmented reality calibration with contactless line-structured light registration for surgical navigation
IF 2.6 · CAS Tier 4 · Medicine
Medical & Biological Engineering & Computing Pub Date : 2025-05-01 Epub Date: 2025-01-14 DOI: 10.1007/s11517-025-03288-z
Yuhua Li, Shan Jiang, Zhiyong Yang, Shuo Yang, Zeyang Zhou
Abstract: The use of AR technology in image-guided neurosurgery enables visualization of lesions concealed deep within the brain. Accurate AR registration is required to precisely match virtual lesions with the anatomical structures displayed under a microscope. The purpose of this work was to develop a real-time augmented surgical navigation system using contactless line-structured light registration, microscope calibration, and visible optical tracking. A contactless, discrete, sparse line-structured light point cloud is used to construct the patient-image registration. Microscope calibration optimization with a dimension-invariant calibrator enables real-time tracking of the microscope. Visible optical tracking integrates a 3D medical model with the surgical microscope video in real time, generating an augmented microscope stream. The proposed patient-image registration algorithm yielded an average root mean square error (RMSE) of 0.78 ± 0.14 mm. The pixel match ratio error (PMRE) of the microscope calibration was 0.646%. The RMSE and PMRE of the system experiments were 0.79 ± 0.10 mm and 3.30 ± 1.08%, respectively. Experimental evaluations confirmed the feasibility and efficiency of microscope AR surgical navigation (MASN) registration. By means of registration technology, MASN overlays virtual lesions onto the microscopic view of the real lesions in real time, which can help surgeons localize lesions hidden deep in tissue.
Pages: 1463-1479.
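The registration RMSE quoted above is, in the usual convention, the residual error after a best-fit rigid transform between corresponding point sets. A minimal sketch using the Kabsch/SVD method, assuming known point correspondences (the paper's actual line-structured light pipeline is more involved):

```python
import numpy as np

def rigid_register_rmse(src, dst):
    """Best-fit rigid transform (Kabsch/SVD) mapping src onto dst,
    returning the post-alignment RMSE over corresponding points."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    aligned = src @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - dst) ** 2, axis=1))))
```

For noise-free correspondences related by a rigid motion, this residual is numerically zero; clinical values like 0.78 mm reflect measurement and segmentation noise.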
Citations: 0

Null subtraction imaging combined with modified delay multiply-and-sum beamforming for coherent plane-wave compounding
IF 2.6 · CAS Tier 4 · Medicine
Medical & Biological Engineering & Computing Pub Date : 2025-04-29 DOI: 10.1007/s11517-025-03364-4
Yijun Xu, Yaoting Yue, Hao Wang, Wenting Gu, Boyi Li, Yaqing Chen, Xin Liu
Abstract: Coherent plane-wave compounding, while efficient for ultrafast ultrasound imaging, yields lower image quality due to unfocused waves. The delay multiply-and-sum (DMAS) beamformer is a representative coherence-based method that can improve image quality, but it suffers from poor speckle quality caused by over-suppression. Current DMAS-based methods involve trade-offs between contrast, resolution, and speckle preservation. To overcome this limitation, a new beamforming method combining null subtraction imaging (NSI) and DMAS is investigated. The proposed method applies NSI and delay-and-sum (DAS) beamformers on receive and performs multiply-and-sum across these beamformers along the transmit dimension, thereby combining the speckle quality of DAS with the high resolution of NSI. The effectiveness of the proposed method is evaluated on simulation, phantom, and in vivo datasets. In the experimental study, compared with NSI, the proposed method improved the contrast ratio by 10.02%, the speckle signal-to-noise ratio by 45.19%, and the generalized contrast-to-noise ratio by 12.37%. It also improved the full width at half maximum by up to 0.24 mm. The results indicate that the proposed method achieves better resolution and contrast while alleviating the issue of excessive compression.
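The abstract does not reproduce the DMAS formula; in its common form, pre-delayed channel signals are combined pairwise with a signed square root, y(t) = Σ_{i&lt;j} sign(s_i s_j)·√|s_i s_j|. A sketch of that baseline combiner (not the paper's modified NSI variant), using the identity that the pairwise sum equals ((Σ s'_i)² − Σ s'_i²)/2 once each channel is signed-square-rooted:

```python
import numpy as np

def dmas(signals):
    """Baseline delay multiply-and-sum of pre-delayed (aligned) channels.

    signals: (n_channels, n_samples) array. The signed square root keeps
    each pairwise product in the same dimensional units as the input."""
    s = np.sign(signals) * np.sqrt(np.abs(signals))   # signed sqrt per channel
    total = np.sum(s, axis=0)
    # sum over i<j of s_i*s_j, computed without an explicit double loop
    return 0.5 * (total ** 2 - np.sum(s ** 2, axis=0))
```

Coherent channels reinforce each other quadratically while incoherent noise products tend to cancel, which is the source of both the contrast gain and the speckle over-suppression discussed above.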
Citations: 0

3D ultrasound shape completion and anatomical feature detection for minimally invasive spine surgery
IF 2.6 · CAS Tier 4 · Medicine
Medical & Biological Engineering & Computing Pub Date : 2025-04-22 DOI: 10.1007/s11517-025-03359-1
Ruixuan Li, Yuyu Cai, Ayoob Davoodi, Gianni Borghesan, Emmanuel Vander Poorten
Abstract: Ultrasound (US) with 3D reconstruction is being explored as a radiation-free approach to visualizing anatomical structures. Such a method could be useful for navigating and assisting minimally invasive spine surgery, where direct sight of the surgical site is absent. During surgery, the pre-operative CT model and surgical plans are registered to the patient's anatomy using intra-operative US reconstruction. However, accurate and automatic registration remains challenging. This difficulty arises from incomplete detection of bone geometry in US images and the challenge of identifying anatomical landmarks. To address the problem, this work presents a pipeline that automates the workflow by providing an initial CT-to-US registration. It uses PointAttN for 3D shape completion, reconstructing occluded bone structures from the partial US reconstruction, and then segments the enriched point cloud with PointNet++ to identify specific anatomical features. To train the networks, synthetic 3D representations of partial views are generated from fifty CT models of the lumbar spine by simulating US physics, effectively mimicking the intra-operative scenario. The proposed pipeline yields mean registration errors of 1.34 mm and 1.63 mm on real US reconstructions of agar phantoms and an ex vivo human spine, respectively. This comprehensive 3D representation enhances anatomical feature interpretation, enabling robust automatic registration. The clinical potential of this framework merits further investigation in pre-clinical trials.
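Point cloud completion networks such as PointAttN are typically trained and evaluated with the Chamfer distance between predicted and ground-truth clouds; the abstract does not state this paper's exact loss, so the following is an illustrative sketch only:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a (N,3) and b (M,3):
    mean nearest-neighbour squared distance in both directions."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)  # (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Because each point only needs a nearest neighbour rather than a fixed correspondence, the metric tolerates clouds of different sizes, which suits comparing a completed bone surface against a CT-derived one.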
Citations: 0

Ultrasound detection of nonalcoholic steatohepatitis using convolutional neural networks with dual-branch global-local feature fusion architecture
IF 2.6 · CAS Tier 4 · Medicine
Medical & Biological Engineering & Computing Pub Date : 2025-04-21 DOI: 10.1007/s11517-025-03361-7
Trina Chattopadhyay, Chun-Hao Lu, Yi-Ping Chao, Chiao-Yin Wang, Dar-In Tai, Ming-Wei Lai, Zhuhuang Zhou, Po-Hsiang Tsui
Abstract: Nonalcoholic steatohepatitis (NASH) is a contributing factor to liver cancer, with ultrasound B-mode imaging as the first-line diagnostic tool. This study applied deep learning to ultrasound B-scan images for NASH detection and introduced an ultrasound-specific data augmentation (USDA) technique with a dual-branch global-local feature fusion architecture (DG-LFFA) to improve model performance and adaptability across imaging conditions. A total of 137 participants were included. Ultrasound images underwent data augmentation (rotation and USDA) for training and testing convolutional neural networks: AlexNet, Inception V3, VGG16, VGG19, ResNet50, and DenseNet201. Gradient-weighted class activation mapping (Grad-CAM) analysis of model attention patterns guided the selection of the optimal backbones for the DG-LFFA implementation. The models achieved testing accuracies of 0.81-0.83 with rotation-based data augmentation, and Grad-CAM analysis showed that ResNet50 and DenseNet201 exhibited stronger attention on the liver. When USDA simulated datasets from different imaging conditions, DG-LFFA (based on ResNet50 and DenseNet201) improved accuracy (0.79 to 0.84 and 0.78 to 0.83), recall (0.72 to 0.81 and 0.70 to 0.78), and F1 score (0.80 to 0.84 for both models). In conclusion, deep architectures (ResNet50 and DenseNet201) enable focused analysis of liver regions for NASH detection, and under USDA-simulated imaging variations the proposed DG-LFFA framework further improves diagnostic performance.
Citations: 0

Automated pulmonary nodule classification from low-dose CT images using ERBNet: an ensemble learning approach
IF 2.6 · CAS Tier 4 · Medicine
Medical & Biological Engineering & Computing Pub Date : 2025-04-15 DOI: 10.1007/s11517-025-03358-2
Yashar Ahmadyar, Alireza Kamali-Asl, Rezvan Samimi, Hossein Arabi, Habib Zaidi
Abstract: The aim of this study was to develop a deep learning method for analyzing CT images with varying doses and qualities, categorizing lung lesions into nodules and non-nodules. The study used the Lung Nodule Analysis 2016 challenge dataset. Low-dose CT (LDCT) images at the 10%, 20%, 40%, and 60% dose levels were generated from the full-dose CT (FDCT) images. Five 3D convolutional networks were developed to classify lung nodules from LDCT and reference FDCT images, evaluated on 400 nodule and 400 non-nodule samples. An ensemble model was also developed to obtain a model that generalizes across dose levels. The model achieved an accuracy of 97.0% for nodule classification on FDCT images but performed relatively poorly (60% accuracy) on LDCT images, indicating that dedicated models should be developed for each dose level. Dedicated low-dose models led to dramatic increases in classification accuracy, reaching 90.0%, 91.1%, 92.7%, and 93.8% for the 10%, 20%, 40%, and 60% dose levels, respectively. The accuracy of the deep learning models decreased gradually, by almost 7%, as the dose dropped from 100% to 10%. The ensemble model, however, reached an accuracy of 95.0% when tested on a combination of dose levels. We presented an ensemble 3D CNN classifier for lesion classification that uses both LDCT and FDCT images and can analyze combinations of CT images with different dose levels and image qualities.
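How ERBNet combines its member networks is not detailed in the abstract; one common choice, shown here purely as an assumption, is soft voting over the dose-specific models' class probabilities:

```python
import numpy as np

def ensemble_vote(prob_list):
    """Soft-voting ensemble: average the probability outputs of several
    models (each of shape (n_samples, n_classes)) and take the argmax."""
    return np.mean(prob_list, axis=0).argmax(axis=-1)
```

Averaging probabilities rather than hard labels lets a confident dose-specific model outvote uncertain ones, which is one plausible way a single ensemble could stay accurate across mixed dose levels.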
Citations: 0

Effects of infrapatellar straps on lower limb muscle synergies during running
IF 2.6 · CAS Tier 4 · Medicine
Medical & Biological Engineering & Computing Pub Date : 2025-04-14 DOI: 10.1007/s11517-025-03349-3
Xueying Zhang, Weiyan Ren, Yih-Kuen Jan, Xingyue Wang, Jie Yao, Fang Pu
Abstract: Infrapatellar straps are commonly recommended for treating and preventing running-related knee injuries, and their effects have been investigated at the level of individual muscles. However, strap use may influence the neuromuscular control strategies of the knee, and the nervous system controls numerous muscles modularly through muscle synergies. This study investigated the effects of infrapatellar straps on muscle synergies during running. Kinematic, kinetic, and electromyography data from seventeen participants were recorded while running at self-selected speeds, with and without infrapatellar straps. Muscle synergies were extracted from the electromyography data using non-negative matrix factorization, yielding the number of modules, the dynamic motor control index (DMC), muscle activation combinations, and temporal activation coefficients. Knee flexion angles and extension moments were estimated using OpenSim. Although wearing infrapatellar straps did not affect the number of modules or the DMC, knee extensor weightings in the modules associated with the stance phase were reduced with the straps, peak temporal activation in the propulsion phase was delayed, and knee extension moments during the stance phase decreased significantly. While infrapatellar straps did not affect muscle synergy modularity, they altered activation patterns and weightings, suggesting that straps may help reduce quadriceps muscle forces.
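Muscle synergy extraction by non-negative matrix factorization can be sketched with the classic Lee-Seung multiplicative updates; this is the generic algorithm, not necessarily the exact implementation used in the study:

```python
import numpy as np

def nmf(V, k, n_iter=1000, seed=0, eps=1e-9):
    """Factorize V ~= W @ H with non-negative W, H (Frobenius objective).

    V: (n_muscles, n_samples) rectified EMG envelopes, non-negative.
    W: (n_muscles, k) muscle weightings per synergy module.
    H: (k, n_samples) temporal activation coefficients."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update weightings
    return W, H
```

The number of modules k is typically chosen as the smallest rank whose reconstruction explains a target fraction of EMG variance; the module weightings in W are what the study reports as reduced knee extensor contributions.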
Citations: 0

Incorporating frequency domain features into radiomics for improved prognosis of esophageal cancer
IF 2.6 · CAS Tier 4 · Medicine
Medical & Biological Engineering & Computing Pub Date : 2025-04-10 DOI: 10.1007/s11517-025-03356-4
Shu Chen, Shumin Zhou, Liyang Wu, Shuchao Chen, Shanshan Liu, Haojiang Li, Guangying Ruan, Lizhi Liu, Hongbo Chen
Abstract: Esophageal cancer is a highly aggressive gastrointestinal malignancy with a poor prognosis, making accurate prognostic assessment essential for patient care. The performance of esophageal cancer prognosis models based on conventional radiomics is limited because they mainly characterize spatial features of the tumor region, such as texture, and cannot fully describe the complexity of esophageal tumors. We therefore incorporate frequency domain features into radiomics to improve prognostic ability for esophageal cancer. Three hundred fifteen esophageal cancer patients participated in the mortality risk prediction experiment, with 80% used for training and 20% for testing. We trained with fivefold cross-validation and fused the five trained models by voting to obtain the final prognostic model. CatBoost achieved the best performance among the machine learning methods compared, including random forests and decision trees. The experimental results showed that combining frequency domain and radiomics features achieved the highest performance in predicting death from esophageal cancer (accuracy: 0.7423, precision: 0.7470, recall: 0.7375, specificity: 0.8030, AUC: 0.8487), significantly better than frequency domain or radiomics features alone. Kaplan-Meier survival analysis further validated the method's performance. The proposed method provides technical support for accurate prognosis of esophageal cancer.
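The paper's frequency domain features are not specified in the abstract; one simple, hypothetical example of such features is the fraction of 2D FFT power falling in concentric radial bands of a tumor ROI:

```python
import numpy as np

def radial_band_energies(img, n_bands=4):
    """Share of 2D FFT power in concentric radial frequency bands.

    img: 2D ROI array. Returns n_bands fractions summing to 1; low bands
    capture coarse structure, high bands capture fine texture."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    cy, cx = h // 2, w // 2                 # DC sits here after fftshift
    y, x = np.ogrid[:h, :w]
    r = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)
    edges = np.linspace(0.0, r.max() + 1e-9, n_bands + 1)
    total = power.sum()
    return np.array([power[(r >= lo) & (r < hi)].sum() / total
                     for lo, hi in zip(edges[:-1], edges[1:])])
```

Feature vectors like this can be concatenated with conventional radiomics features before feeding a classifier such as CatBoost.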
Citations: 0