Siyi Du, Fei Wang, Guilin Zhou, Jiaqi Li, Lintao Yang, Dongxu Wang
{"title":"Vision-Based Robotic Manipulation of Intelligent Wheelchair with Human-Computer Shared Control","authors":"Siyi Du, Fei Wang, Guilin Zhou, Jiaqi Li, Lintao Yang, Dongxu Wang","doi":"10.1109/CCDC52312.2021.9601850","DOIUrl":null,"url":null,"abstract":"Based on human-computer shared control, this paper introduces a novel robotic manipulation fashion combining computer vision and brain-computer interface (BCI). Designed for disabled groups, the intelligent wheelchair with our proposed method exhibits the precise robotic manipulation ability but also the human decision-making capabilities, which will bring better life quality for the disabled. The overall pipeline includes three parts: asynchronous brain-computer interface based on steady-state visual evoked potential (SSVEP), vision detection with deep network and robotic manipulation of UR5 robot. Particularly, first, the user receives the periodic visual stimulation with different frequencies and then electroencephalography (EEG) signals of the user are collected by EEG cap. Second, we preprocess the EEG signals and extract feature embedding. To judge the frequency of the stimulus signals received by the user, the canonical correlation analysis (CCA) algorithm is used to fit and compare it with the standard EEG signal. In our work, the signals with different frequencies corresponds to different types of objects item by item. Third, we apply the off-the-shelfvision detection algorithm, Mask-RCNN, to output the position of the object corresponding to the detected EEG in the image frame. UR5 robot arm plan manipulation path according to the position of objects transferred by robot operating system (ROS). 
Extensive experiments show that our method can achieves performance with more than 90% accuracy and the user can control the robot arm to grab the expected object accurately through BCI.","PeriodicalId":143976,"journal":{"name":"2021 33rd Chinese Control and Decision Conference (CCDC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 33rd Chinese Control and Decision Conference (CCDC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCDC52312.2021.9601850","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Based on human-computer shared control, this paper introduces a novel robotic manipulation approach combining computer vision and a brain-computer interface (BCI). Designed for disabled users, an intelligent wheelchair equipped with our proposed method exhibits not only precise robotic manipulation but also human decision-making capability, which can improve quality of life for the disabled. The overall pipeline comprises three parts: an asynchronous brain-computer interface based on steady-state visual evoked potentials (SSVEP), vision detection with a deep network, and robotic manipulation with a UR5 robot. Specifically, first, the user receives periodic visual stimulation at different frequencies, and the user's electroencephalography (EEG) signals are collected with an EEG cap. Second, we preprocess the EEG signals and extract feature embeddings. To identify the frequency of the stimulus received by the user, the canonical correlation analysis (CCA) algorithm is used to fit and compare the recorded signal against standard EEG reference signals. In our work, each stimulation frequency corresponds one-to-one to a different type of object. Third, we apply an off-the-shelf vision detection algorithm, Mask R-CNN, to output the image-frame position of the object corresponding to the detected EEG frequency. The UR5 robot arm then plans a manipulation path according to the object positions transferred via the Robot Operating System (ROS). Extensive experiments show that our method achieves more than 90% accuracy, and the user can accurately control the robot arm to grasp the intended object through the BCI.