Antonio Rodríguez-Fernández, Alex van den Berg, Salvatore Luca Cucinella, Joan Lobo-Prat, Josep M Font-Llagunes, Laura Marchal-Crespo
{"title":"沉浸式虚拟现实技术用于学习类似外骨骼的虚拟行走:一项可行性研究。","authors":"Antonio Rodríguez-Fernández, Alex van den Berg, Salvatore Luca Cucinella, Joan Lobo-Prat, Josep M Font-Llagunes, Laura Marchal-Crespo","doi":"10.1186/s12984-024-01482-y","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>Virtual Reality (VR) has proven to be an effective tool for motor (re)learning. Furthermore, with the current commercialization of low-cost head-mounted displays (HMDs), immersive virtual reality (IVR) has become a viable rehabilitation tool. Nonetheless, it is still an open question how immersive virtual environments should be designed to enhance motor learning, especially to support the learning of complex motor tasks. An example of such a complex task is triggering steps while wearing lower-limb exoskeletons as it requires the learning of several sub-tasks, e.g., shifting the weight from one leg to the other, keeping the trunk upright, and initiating steps. This study aims to find the necessary elements in VR to promote motor learning of complex virtual gait tasks.</p><p><strong>Methods: </strong>In this study, we developed an HMD-IVR-based system for training to control wearable lower-limb exoskeletons for people with sensorimotor disorders. The system simulates a virtual walking task of an avatar resembling the sub-tasks needed to trigger steps with an exoskeleton. We ran an experiment with forty healthy participants to investigate the effects of first- (1PP) vs. third-person perspective (3PP) and the provision (or not) of concurrent visual feedback of participants' movements on the walking performance - namely number of steps, trunk inclination, and stride length -, as well as the effects on embodiment, usability, cybersickness, and perceived workload.</p><p><strong>Results: </strong>We found that all participants learned to execute the virtual walking task. However, no clear interaction of perspective and visual feedback improved the learning of all sub-tasks concurrently. 
Instead, the key seems to lie in selecting the appropriate perspective and visual feedback for each sub-task. Notably, participants embodied the avatar across all training modalities with low cybersickness levels. Still, participants' cognitive load remained high, leading to marginally acceptable usability scores.</p><p><strong>Conclusions: </strong>Our findings suggest that to maximize learning, users should train sub-tasks sequentially using the most suitable combination of person's perspective and visual feedback for each sub-task. This research offers valuable insights for future developments in IVR to support individuals with sensorimotor disorders in improving the learning of walking with wearable exoskeletons.</p>","PeriodicalId":16384,"journal":{"name":"Journal of NeuroEngineering and Rehabilitation","volume":null,"pages":null},"PeriodicalIF":5.2000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Immersive virtual reality for learning exoskeleton-like virtual walking: a feasibility study.\",\"authors\":\"Antonio Rodríguez-Fernández, Alex van den Berg, Salvatore Luca Cucinella, Joan Lobo-Prat, Josep M Font-Llagunes, Laura Marchal-Crespo\",\"doi\":\"10.1186/s12984-024-01482-y\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>Virtual Reality (VR) has proven to be an effective tool for motor (re)learning. Furthermore, with the current commercialization of low-cost head-mounted displays (HMDs), immersive virtual reality (IVR) has become a viable rehabilitation tool. Nonetheless, it is still an open question how immersive virtual environments should be designed to enhance motor learning, especially to support the learning of complex motor tasks. 
An example of such a complex task is triggering steps while wearing lower-limb exoskeletons as it requires the learning of several sub-tasks, e.g., shifting the weight from one leg to the other, keeping the trunk upright, and initiating steps. This study aims to find the necessary elements in VR to promote motor learning of complex virtual gait tasks.</p><p><strong>Methods: </strong>In this study, we developed an HMD-IVR-based system for training to control wearable lower-limb exoskeletons for people with sensorimotor disorders. The system simulates a virtual walking task of an avatar resembling the sub-tasks needed to trigger steps with an exoskeleton. We ran an experiment with forty healthy participants to investigate the effects of first- (1PP) vs. third-person perspective (3PP) and the provision (or not) of concurrent visual feedback of participants' movements on the walking performance - namely number of steps, trunk inclination, and stride length -, as well as the effects on embodiment, usability, cybersickness, and perceived workload.</p><p><strong>Results: </strong>We found that all participants learned to execute the virtual walking task. However, no clear interaction of perspective and visual feedback improved the learning of all sub-tasks concurrently. Instead, the key seems to lie in selecting the appropriate perspective and visual feedback for each sub-task. Notably, participants embodied the avatar across all training modalities with low cybersickness levels. Still, participants' cognitive load remained high, leading to marginally acceptable usability scores.</p><p><strong>Conclusions: </strong>Our findings suggest that to maximize learning, users should train sub-tasks sequentially using the most suitable combination of person's perspective and visual feedback for each sub-task. 
This research offers valuable insights for future developments in IVR to support individuals with sensorimotor disorders in improving the learning of walking with wearable exoskeletons.</p>\",\"PeriodicalId\":16384,\"journal\":{\"name\":\"Journal of NeuroEngineering and Rehabilitation\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":5.2000,\"publicationDate\":\"2024-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of NeuroEngineering and Rehabilitation\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1186/s12984-024-01482-y\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of NeuroEngineering and Rehabilitation","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1186/s12984-024-01482-y","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Immersive virtual reality for learning exoskeleton-like virtual walking: a feasibility study.
Purpose: Virtual Reality (VR) has proven to be an effective tool for motor (re)learning. Furthermore, with the current commercialization of low-cost head-mounted displays (HMDs), immersive virtual reality (IVR) has become a viable rehabilitation tool. Nonetheless, it is still an open question how immersive virtual environments should be designed to enhance motor learning, especially to support the learning of complex motor tasks. An example of such a complex task is triggering steps while wearing a lower-limb exoskeleton, as it requires the learning of several sub-tasks, e.g., shifting the weight from one leg to the other, keeping the trunk upright, and initiating steps. This study aims to find the necessary elements in VR to promote motor learning of complex virtual gait tasks.
Methods: In this study, we developed an HMD-based IVR system for training people with sensorimotor disorders to control wearable lower-limb exoskeletons. The system simulates a virtual walking task in which an avatar performs the sub-tasks needed to trigger steps with an exoskeleton. We ran an experiment with forty healthy participants to investigate the effects of first-person (1PP) vs. third-person perspective (3PP) and the provision (or not) of concurrent visual feedback of participants' movements on walking performance (namely number of steps, trunk inclination, and stride length), as well as on embodiment, usability, cybersickness, and perceived workload.
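The experiment described above follows a 2×2 between-subjects design: perspective (1PP vs. 3PP) crossed with concurrent visual feedback (provided or not). A minimal sketch of how the forty participants could be balanced across the four training modalities is shown below; the function name, group sizes per condition, and the fixed seed are illustrative assumptions, not details taken from the study.

```python
import random

# Hypothetical 2x2 factorial design: perspective x concurrent visual feedback.
CONDITIONS = [("1PP", True), ("1PP", False), ("3PP", True), ("3PP", False)]

def assign_participants(n_participants=40, seed=0):
    """Return a dict mapping participant id -> (perspective, feedback),
    with participants balanced evenly across the four conditions."""
    assert n_participants % len(CONDITIONS) == 0, "group sizes must be equal"
    # Build one slot per participant, then shuffle for random assignment.
    slots = CONDITIONS * (n_participants // len(CONDITIONS))
    rng = random.Random(seed)  # fixed seed for a reproducible allocation
    rng.shuffle(slots)
    return {pid: cond for pid, cond in enumerate(slots, start=1)}

groups = assign_participants()
```

With forty participants this yields ten per condition, matching the balanced allocation a factorial design of this kind typically requires.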
Results: We found that all participants learned to execute the virtual walking task. However, no combination of perspective and visual feedback clearly improved the learning of all sub-tasks concurrently. Instead, the key seems to lie in selecting the appropriate perspective and visual feedback for each sub-task. Notably, participants embodied the avatar across all training modalities with low cybersickness levels. Still, participants' cognitive load remained high, leading to only marginally acceptable usability scores.
Conclusions: Our findings suggest that, to maximize learning, users should train sub-tasks sequentially using the most suitable combination of perspective and visual feedback for each sub-task. This research offers valuable insights for future developments in IVR to support individuals with sensorimotor disorders in learning to walk with wearable exoskeletons.
Journal introduction:
Journal of NeuroEngineering and Rehabilitation considers manuscripts on all aspects of research that result from cross-fertilization of the fields of neuroscience, biomedical engineering, and physical medicine & rehabilitation.