Self-Supervised Learning via VICReg Enables Training of EMG Pattern Recognition Using Continuous Data with Unclear Labels

Shriram Tallam Puranam Raghu, Dawn T. MacIsaac, Erik J. Scheme

arXiv:2409.11632 · arXiv - EE - Signal Processing · 2024-09-18
In this study, we investigate the application of self-supervised learning via pre-trained Long Short-Term Memory (LSTM) networks for training surface electromyography pattern recognition (sEMG-PR) models using dynamic data with transitions. Labeling such data is challenging because ground-truth labels are unavailable during transitions between classes; self-supervised pre-training offers a way to circumvent this issue. We compare the performance of LSTMs trained with either a fully supervised or a self-supervised loss against a conventional non-temporal model, linear discriminant analysis (LDA), on two data types: segmented ramp data (lacking transition information) and continuous dynamic data that includes class transitions. Statistical analysis reveals that the temporal models outperform the non-temporal model when trained with continuous dynamic data, and that the proposed VICReg pre-trained temporal model trained on continuous dynamic data significantly outperforms all other models. Interestingly, when using only ramp data, the LSTM performs worse than the LDA, suggesting overfitting due to the absence of sufficient dynamics and highlighting the interplay between data type and model choice. Overall, this work underscores the importance of representative dynamics in training data and the potential of self-supervised approaches to enhance sEMG-PR models.
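For context, VICReg (Variance-Invariance-Covariance Regularization) pre-trains an encoder without labels by combining three terms computed on a pair of embeddings: an invariance term that pulls the embeddings of two views of the same input together, a variance term that keeps the standard deviation of each embedding dimension above a margin to prevent collapse, and a covariance term that decorrelates embedding dimensions. The sketch below is a minimal PyTorch illustration of that loss, not the authors' implementation: the function name, the use of LSTM embeddings of paired sEMG windows as inputs, and the weighting coefficients (the defaults from the original VICReg paper) are assumptions made for illustration.

```python
# Minimal VICReg loss sketch (illustrative; not the paper's code).
import torch
import torch.nn.functional as F

def vicreg_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """z_a, z_b: (batch, dim) embeddings of two views of the same signal,
    e.g. LSTM outputs for paired sEMG windows (an assumption here)."""
    n, d = z_a.shape

    # Invariance: embeddings of the two views should agree.
    sim_loss = F.mse_loss(z_a, z_b)

    # Variance: keep each dimension's std above 1 so embeddings don't collapse.
    std_a = torch.sqrt(z_a.var(dim=0) + eps)
    std_b = torch.sqrt(z_b.var(dim=0) + eps)
    var_loss = F.relu(1.0 - std_a).mean() + F.relu(1.0 - std_b).mean()

    # Covariance: penalize off-diagonal covariance to decorrelate dimensions.
    z_a = z_a - z_a.mean(dim=0)
    z_b = z_b - z_b.mean(dim=0)
    cov_a = (z_a.T @ z_a) / (n - 1)
    cov_b = (z_b.T @ z_b) / (n - 1)

    def off_diagonal(c):
        # Flattening trick: drop the diagonal entries of a (d, d) matrix.
        return c.flatten()[:-1].view(d - 1, d + 1)[:, 1:].flatten()

    cov_loss = (off_diagonal(cov_a).pow(2).sum() / d
                + off_diagonal(cov_b).pow(2).sum() / d)

    return sim_w * sim_loss + var_w * var_loss + cov_w * cov_loss
```

Because none of the three terms uses class labels, a loss of this form can be applied to continuous sEMG recordings, including the ambiguous transition periods, which is the property the abstract exploits for pre-training.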