{"title":"基于物体运动图像的移动人形机器人控制","authors":"Eneo Petoku, G. Capi","doi":"10.1109/ICCR55715.2022.10053905","DOIUrl":null,"url":null,"abstract":"Brain-Computer Interface research aims to build systems that can connect the brain to computer or a certain robotic application. The brain activity is solely used to generate commands that can be recognized by the computer. To generate a recognizable brain activity, usually the subject imagines the movements of one's limbs without performing any real movement. In the literature, this paradigm is called Motor Imagery (MI). The subject provides data through a particular recording technology, such as EEG, in a certain time frame, in which the subject forces himself/herself into the feeling of performing a particular action. Each recorded data is linked to a label, and different techniques are used to learn patterns, in order to map them correctly. The goal of this paper is to investigate, whether it is possible to generate similar results as in the case of imagining the movement of limbs, by imagining the movement of an external object. To investigate this, we compare the performance of Motor Imagery and Object Motor Imagery. In the first case the mental task consists of imagining the movements of arms, while in the second the imagining of moving an external box through solely brain activity. A video of a box that moves through a plane in two directions, right, left, is used as visual feedback in both cases. The recorded EEG data are split into training and testing subsets, and are fed to a deep neural network, that tries to learn the different patterns and to classify them. The results show that Object Motor Imagery can achieve better results compared to MI, despite the lack of embodiment and congruity with any daily neural command. The trained architecture is used to control a mobile humanoid, investigating the implementation of Object Motor Movement in robotic application.","PeriodicalId":441511,"journal":{"name":"2022 4th International Conference on Control and Robotics (ICCR)","volume":"7 4","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Mobile Humanoid Robot Control through Object Movement Imagery\",\"authors\":\"Eneo Petoku, G. Capi\",\"doi\":\"10.1109/ICCR55715.2022.10053905\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Brain-Computer Interface research aims to build systems that can connect the brain to computer or a certain robotic application. The brain activity is solely used to generate commands that can be recognized by the computer. To generate a recognizable brain activity, usually the subject imagines the movements of one's limbs without performing any real movement. In the literature, this paradigm is called Motor Imagery (MI). The subject provides data through a particular recording technology, such as EEG, in a certain time frame, in which the subject forces himself/herself into the feeling of performing a particular action. Each recorded data is linked to a label, and different techniques are used to learn patterns, in order to map them correctly. The goal of this paper is to investigate, whether it is possible to generate similar results as in the case of imagining the movement of limbs, by imagining the movement of an external object. To investigate this, we compare the performance of Motor Imagery and Object Motor Imagery. 
In the first case the mental task consists of imagining the movements of arms, while in the second the imagining of moving an external box through solely brain activity. A video of a box that moves through a plane in two directions, right, left, is used as visual feedback in both cases. The recorded EEG data are split into training and testing subsets, and are fed to a deep neural network, that tries to learn the different patterns and to classify them. The results show that Object Motor Imagery can achieve better results compared to MI, despite the lack of embodiment and congruity with any daily neural command. The trained architecture is used to control a mobile humanoid, investigating the implementation of Object Motor Movement in robotic application.\",\"PeriodicalId\":441511,\"journal\":{\"name\":\"2022 4th International Conference on Control and Robotics (ICCR)\",\"volume\":\"7 4\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 4th International Conference on Control and Robotics (ICCR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCR55715.2022.10053905\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 4th International Conference on Control and Robotics (ICCR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCR55715.2022.10053905","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Mobile Humanoid Robot Control through Object Movement Imagery
Brain-Computer Interface (BCI) research aims to build systems that connect the brain to a computer or to a robotic application, using brain activity alone to generate commands the computer can recognize. To produce a recognizable pattern of brain activity, the subject typically imagines moving his or her own limbs without performing any real movement; in the literature, this paradigm is called Motor Imagery (MI). The subject provides data through a recording technology such as EEG over a fixed time window, during which he or she concentrates on the feeling of performing a particular action. Each recorded trial is linked to a label, and different techniques are used to learn the underlying patterns and map them to the correct labels. The goal of this paper is to investigate whether imagining the movement of an external object can produce results similar to imagining the movement of one's own limbs. To this end, we compare the performance of Motor Imagery and Object Motor Imagery. In the first case the mental task consists of imagining arm movements; in the second, of imagining moving an external box through brain activity alone. In both cases, a video of a box moving across a plane in two directions, right and left, is used as visual feedback. The recorded EEG data are split into training and testing subsets and fed to a deep neural network that learns to distinguish and classify the different patterns. The results show that Object Motor Imagery achieves better results than MI, despite the lack of embodiment and of congruity with any everyday neural command. The trained architecture is then used to control a mobile humanoid robot, demonstrating the implementation of Object Motor Imagery in a robotic application.
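
The abstract describes a pipeline of labeled EEG epochs, a train/test split, a deep network classifier, and a mapping from the predicted class (right or left) to a robot command. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of that pipeline under assumed specifics: channel count, epoch length, network layers, and the label-to-command mapping are all placeholders, and the random tensors stand in for real recordings.

```python
# Minimal sketch (not the paper's code): labeled EEG epochs -> small 1D-CNN
# classifier -> left/right command for a mobile robot. Shapes, layer sizes,
# and the command mapping are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

N_CHANNELS, N_SAMPLES, N_CLASSES = 16, 256, 2   # assumed montage / epoch length

class EEGClassifier(nn.Module):
    """Small temporal-convolution network for two-class (right/left) imagery."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3),
            nn.BatchNorm1d(64), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),             # average over the time axis
        )
        self.classifier = nn.Linear(64, N_CLASSES)

    def forward(self, x):                        # x: (batch, channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

# Placeholder data: real EEG trials and labels would replace these tensors.
X = torch.randn(200, N_CHANNELS, N_SAMPLES)
y = torch.randint(0, N_CLASSES, (200,))
split = 160                                      # simple train/test split
train_loader = DataLoader(TensorDataset(X[:split], y[:split]),
                          batch_size=16, shuffle=True)

model, loss_fn = EEGClassifier(), nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):                           # short illustrative training loop
    for xb, yb in train_loader:
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()

# Map a prediction on one held-out epoch to a hypothetical robot command.
COMMANDS = {0: "turn_left", 1: "turn_right"}     # assumed label-to-command mapping
with torch.no_grad():
    pred = model(X[split:split + 1]).argmax(dim=1).item()
print("predicted command:", COMMANDS[pred])
```

In a real setup, the final command string would be sent to the humanoid's motion controller, and the classifier architecture and hyperparameters would be chosen from the recorded Motor Imagery and Object Motor Imagery datasets rather than fixed as above.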