Batın Yılmazgün, Jonas Weber, Thorsten Stein, Stefan Sell, Bernd J. Stetter
{"title":"预测三维地面反作用力跨越各种运动任务:卷积神经网络研究比较不同的惯性测量单元配置。","authors":"Batın Yılmazgün , Jonas Weber , Thorsten Stein , Stefan Sell , Bernd J. Stetter","doi":"10.1016/j.jbiomech.2025.112888","DOIUrl":null,"url":null,"abstract":"<div><div>Ground reaction forces (GRFs) are crucial for understanding movement biomechanics and for assessing the load on the musculoskeletal system. While inertial measurement units (IMUs) are increasingly used for gait analysis in natural environments, they cannot directly capture GRFs. Machine learning can be applied to predict 3D-GRFs based on IMU data. However, previous studies mainly focused on vertical GRF (vGRF) and isolated movement tasks. This study aimed to systematically evaluate the prediction accuracy of convolutional neural networks (CNNs) for 3D-GRFs using IMUs from single and multiple sensor configurations across various movement tasks. 20 healthy participants performed six movement tasks including walking, stair ascent, stair descent, running, a running step turn and a running spin turn at self-selected speeds. CNNs were trained to predict 3D-GRFs on IMU time series data for different configurations (lower body [7 IMUs], single leg [4 IMUs], femur-tibia [2 IMUs], tibia [1 IMU] and pelvis [1 IMU]). Prediction accuracies were assessed based on leave-one-subject-out cross validations using Pearson correlation (r) and relative root mean squared error (relRMSE). Across all tasks, CNNs predicted vGRF most accurately (r = 0.98, relRMSE ≤ 7.44 %), followed by anterior-posterior GRF (r ≥ 0.92, relRMSE ≤ 14.24 %), with medial–lateral GRF being the least accurate (r ≥ 0.74, relRMSE ≤ 29.46 %). CNNs predicted vGRF consistently across tasks, with similar accuracy for multi IMU (average r = 0.98, average relRMSE: 6.06 %) and single IMU configurations (average r = 0.98, average relRMSE: 6.88 %), supporting single IMU configurations for vGRF in practical applications. During cutting maneuvers, the lower body configuration reduces the relRMSE for mlGRF (5.23–12.46 %) and apGRF (1.53–3.16 %) compared to single IMU configurations. However, for mlGRF and apGRF during cutting tasks, lower body configuration improve accuracy, highlighting a trade-off between simplicity and performance.</div></div>","PeriodicalId":15168,"journal":{"name":"Journal of biomechanics","volume":"192 ","pages":"Article 112888"},"PeriodicalIF":2.4000,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Predicting 3D ground reaction forces across various movement tasks: a convolutional neural network study comparing different inertial measurement unit configurations\",\"authors\":\"Batın Yılmazgün , Jonas Weber , Thorsten Stein , Stefan Sell , Bernd J. Stetter\",\"doi\":\"10.1016/j.jbiomech.2025.112888\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Ground reaction forces (GRFs) are crucial for understanding movement biomechanics and for assessing the load on the musculoskeletal system. While inertial measurement units (IMUs) are increasingly used for gait analysis in natural environments, they cannot directly capture GRFs. Machine learning can be applied to predict 3D-GRFs based on IMU data. However, previous studies mainly focused on vertical GRF (vGRF) and isolated movement tasks. 
This study aimed to systematically evaluate the prediction accuracy of convolutional neural networks (CNNs) for 3D-GRFs using IMUs from single and multiple sensor configurations across various movement tasks. 20 healthy participants performed six movement tasks including walking, stair ascent, stair descent, running, a running step turn and a running spin turn at self-selected speeds. CNNs were trained to predict 3D-GRFs on IMU time series data for different configurations (lower body [7 IMUs], single leg [4 IMUs], femur-tibia [2 IMUs], tibia [1 IMU] and pelvis [1 IMU]). Prediction accuracies were assessed based on leave-one-subject-out cross validations using Pearson correlation (r) and relative root mean squared error (relRMSE). Across all tasks, CNNs predicted vGRF most accurately (r = 0.98, relRMSE ≤ 7.44 %), followed by anterior-posterior GRF (r ≥ 0.92, relRMSE ≤ 14.24 %), with medial–lateral GRF being the least accurate (r ≥ 0.74, relRMSE ≤ 29.46 %). CNNs predicted vGRF consistently across tasks, with similar accuracy for multi IMU (average r = 0.98, average relRMSE: 6.06 %) and single IMU configurations (average r = 0.98, average relRMSE: 6.88 %), supporting single IMU configurations for vGRF in practical applications. During cutting maneuvers, the lower body configuration reduces the relRMSE for mlGRF (5.23–12.46 %) and apGRF (1.53–3.16 %) compared to single IMU configurations. However, for mlGRF and apGRF during cutting tasks, lower body configuration improve accuracy, highlighting a trade-off between simplicity and performance.</div></div>\",\"PeriodicalId\":15168,\"journal\":{\"name\":\"Journal of biomechanics\",\"volume\":\"192 \",\"pages\":\"Article 112888\"},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2025-08-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of biomechanics\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0021929025004002\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"BIOPHYSICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of biomechanics","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0021929025004002","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"BIOPHYSICS","Score":null,"Total":0}
Predicting 3D ground reaction forces across various movement tasks: a convolutional neural network study comparing different inertial measurement unit configurations
Ground reaction forces (GRFs) are crucial for understanding movement biomechanics and for assessing the load on the musculoskeletal system. While inertial measurement units (IMUs) are increasingly used for gait analysis in natural environments, they cannot directly capture GRFs. Machine learning can be applied to predict 3D-GRFs based on IMU data. However, previous studies mainly focused on vertical GRF (vGRF) and isolated movement tasks. This study aimed to systematically evaluate the prediction accuracy of convolutional neural networks (CNNs) for 3D-GRFs using single and multiple IMU configurations across various movement tasks. Twenty healthy participants performed six movement tasks at self-selected speeds: walking, stair ascent, stair descent, running, a running step turn, and a running spin turn. CNNs were trained to predict 3D-GRFs from IMU time series data for different configurations (lower body [7 IMUs], single leg [4 IMUs], femur-tibia [2 IMUs], tibia [1 IMU], and pelvis [1 IMU]). Prediction accuracies were assessed with leave-one-subject-out cross-validation using Pearson correlation (r) and relative root mean squared error (relRMSE). Across all tasks, CNNs predicted vGRF most accurately (r = 0.98, relRMSE ≤ 7.44 %), followed by anterior-posterior GRF (apGRF; r ≥ 0.92, relRMSE ≤ 14.24 %), with medial–lateral GRF (mlGRF) being the least accurate (r ≥ 0.74, relRMSE ≤ 29.46 %). CNNs predicted vGRF consistently across tasks, with similar accuracy for multi-IMU (average r = 0.98, average relRMSE: 6.06 %) and single-IMU configurations (average r = 0.98, average relRMSE: 6.88 %), supporting single-IMU configurations for vGRF in practical applications. During cutting maneuvers, however, the lower body configuration reduced the relRMSE for mlGRF (5.23–12.46 %) and apGRF (1.53–3.16 %) compared to single-IMU configurations, indicating that multi-IMU setups improve accuracy for these components and highlighting a trade-off between sensor simplicity and prediction performance.
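The abstract does not report the network architecture, window length, or the relRMSE normalization, so the following is only a minimal Python sketch of the kind of pipeline it describes: a small 1D CNN mapping stacked IMU channels to the three GRF components, evaluated with leave-one-subject-out cross-validation using Pearson r and a range-normalized relRMSE. All layer sizes, the range normalization, and the names GRFNet, pearson_r, rel_rmse, and loso_evaluate are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only; architecture and relRMSE normalization are assumed.
import numpy as np
import torch
import torch.nn as nn


class GRFNet(nn.Module):
    """1D CNN: stacked IMU channels (e.g., 7 IMUs x 6 axes = 42) -> 3D-GRF time series."""

    def __init__(self, in_channels: int = 42, out_channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(64, out_channels, kernel_size=1),  # per-sample 3D-GRF output
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, time) -> (batch, 3, time)
        return self.net(x)


def pearson_r(pred: np.ndarray, true: np.ndarray) -> float:
    """Pearson correlation between predicted and measured GRF curves."""
    return float(np.corrcoef(pred.ravel(), true.ravel())[0, 1])


def rel_rmse(pred: np.ndarray, true: np.ndarray) -> float:
    """RMSE in percent, here normalized to the range of the measured signal (assumed)."""
    rmse = np.sqrt(np.mean((pred - true) ** 2))
    return float(100.0 * rmse / (true.max() - true.min()))


def loso_evaluate(X: np.ndarray, y: np.ndarray, subjects: np.ndarray, epochs: int = 5):
    """Leave-one-subject-out CV: train on all subjects but one, test on the held-out one.

    X: (n_windows, channels, time) IMU data; y: (n_windows, 3, time) GRFs;
    subjects: (n_windows,) subject ID per window.
    """
    results = []
    for held_out in np.unique(subjects):
        train, test = subjects != held_out, subjects == held_out
        model = GRFNet(in_channels=X.shape[1])
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()
        Xtr = torch.as_tensor(X[train], dtype=torch.float32)
        ytr = torch.as_tensor(y[train], dtype=torch.float32)
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(Xtr), ytr)
            loss.backward()
            opt.step()
        with torch.no_grad():
            pred = model(torch.as_tensor(X[test], dtype=torch.float32)).numpy()
        results.append((pearson_r(pred, y[test]), rel_rmse(pred, y[test])))
    return results
```

Under these assumptions, the sensor configurations compared in the study would differ only in `in_channels` (e.g., a single tibia IMU with a 3-axis accelerometer and gyroscope yields 6 channels, the 7-IMU lower body setup 42); the rest of the training and evaluation pipeline is unchanged.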
Journal Introduction:
The Journal of Biomechanics publishes reports of original and substantial findings using the principles of mechanics to explore biological problems. Analytical as well as experimental papers may be submitted, and the journal accepts original articles, surveys and perspective articles (usually by Editorial invitation only), book reviews and letters to the Editor. The criteria for acceptance of manuscripts include excellence, novelty, significance, clarity, conciseness and interest to the readership.
Papers published in the journal may cover a wide range of topics in biomechanics, including, but not limited to:
-Fundamental Topics - Biomechanics of the musculoskeletal, cardiovascular, and respiratory systems, mechanics of hard and soft tissues, biofluid mechanics, mechanics of prostheses and implant-tissue interfaces, mechanics of cells.
-Cardiovascular and Respiratory Biomechanics - Mechanics of blood-flow, air-flow, mechanics of the soft tissues, flow-tissue or flow-prosthesis interactions.
-Cell Biomechanics - Biomechanical analyses of cells, membranes and sub-cellular structures; the relationship of the mechanical environment to cell and tissue response.
-Dental Biomechanics - Design and analysis of dental tissues and prostheses, mechanics of chewing.
-Functional Tissue Engineering - The role of biomechanical factors in engineered tissue replacements and regenerative medicine.
-Injury Biomechanics - Mechanics of impact and trauma, dynamics of man-machine interaction.
-Molecular Biomechanics - Mechanical analyses of biomolecules.
-Orthopedic Biomechanics - Mechanics of fracture and fracture fixation, mechanics of implants and implant fixation, mechanics of bones and joints, wear of natural and artificial joints.
-Rehabilitation Biomechanics - Analyses of gait, mechanics of prosthetics and orthotics.
-Sports Biomechanics - Mechanical analyses of sports performance.