Letizia Gionfrida, Richard W Nuckols, Conor J Walsh, Robert D Howe
Improved Fascicle Length Estimates From Ultrasound Using a U-net-LSTM Framework

IEEE International Conference on Rehabilitation Robotics (ICORR), 2023, pp. 1-6
Published: 2023-09-01
DOI: 10.1109/ICORR58425.2023.10328385
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10802115/pdf/
Citations: 0
Abstract
Brightness-mode (B-mode) ultrasound has been used to measure in vivo muscle dynamics for assistive devices. Estimation of fascicle length from B-mode images has transitioned from time-consuming manual processes to automatic methods, but these methods fail to reach pixel-wise accuracy across extended locomotion. In this work, we address this challenge by combining a U-net architecture, with proven segmentation abilities, with an LSTM component that exploits temporal information to improve validation accuracy in the prediction of fascicle lengths. Using 64,849 ultrasound frames of the medial gastrocnemius, we semi-manually generated ground truth for training the proposed U-net-LSTM. Compared with a traditional U-net and a CNN-LSTM configuration, the proposed U-net-LSTM achieves better validation accuracy, mean square error (MSE), and mean absolute error (MAE): 91.4%, MSE = 0.1 ± 0.03 mm, MAE = 0.2 ± 0.05 mm. The proposed framework could be used for real-time, closed-loop wearable control during real-world locomotion.
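As an illustration of the evaluation described above, the sketch below computes fascicle length from segmented endpoint coordinates and scores predictions with the same MSE/MAE metrics the abstract reports. This is a minimal sketch, not the paper's pipeline: the endpoint-based length formula, the `mm_per_px` calibration factor, and all numeric values are illustrative assumptions.

```python
import numpy as np

def fascicle_length_mm(p_origin, p_insertion, mm_per_px):
    """Euclidean distance between fascicle endpoints, converted to mm.

    p_origin, p_insertion: (N, 2) arrays of per-frame pixel coordinates
    (hypothetical outputs of a segmentation model).
    mm_per_px: scalar ultrasound calibration factor (assumed known).
    """
    return np.linalg.norm(p_insertion - p_origin, axis=1) * mm_per_px

def error_metrics(pred_mm, true_mm):
    """Mean squared error and mean absolute error in mm, per the abstract."""
    err = pred_mm - true_mm
    return float(np.mean(err ** 2)), float(np.mean(np.abs(err)))

# Two illustrative frames: endpoints in pixels, 0.1 mm/px calibration.
origin = np.array([[10.0, 20.0], [11.0, 21.0]])
insertion = np.array([[310.0, 420.0], [305.0, 415.0]])
pred = fascicle_length_mm(origin, insertion, mm_per_px=0.1)
true = pred + np.array([0.2, -0.2])  # synthetic ground-truth offsets
mse, mae = error_metrics(pred, true)
```

With the synthetic ±0.2 mm offsets, MAE comes out to 0.2 mm, matching the scale of error the abstract quotes for the U-net-LSTM.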