{"title":"ToF 3D Vision Detection and Localization of Soft Packaging Bags Based on Deep Learning","authors":"Chengyang Shen, Weidong Chen","doi":"10.1109/RCAR52367.2021.9517566","DOIUrl":"https://doi.org/10.1109/RCAR52367.2021.9517566","url":null,"abstract":"The detection of soft packaging bags is a key step in the process of soft packaging bags unpacking. Time-of-flight(ToF) camera is used as vision sensor and a multi-scale detection method based on deep learning is proposed to solve the deformation and shielding problems of soft packaging bags. This method is improved on the basis of YOLOv3. Aiming at the texture disorder and shape change caused by the deformation of the soft packaging bags, the deformation convolution is used to replace the standard convolution for feature extraction. In view of the slow detection speed of YOLOv3, the inverted residual module based on depth separable convolution is used to replace the residual module. Aiming at the problem of non-detection caused by the occlusion between soft packaging bags, the loss function in YOLOv3 is improved and the size of anchor box is adjusted. The test is carried out when there were different degrees of shielding between the soft packaging bags and different degrees of deformation of the soft packaging bags. 
The experimental results show that the detection speed of this method reaches 0.8s/frame, the recall rate reaches 99%, and the relative positioning accuracy reaches 2 cm.","PeriodicalId":232892,"journal":{"name":"2021 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125442271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward reducing the effect of force variations on electromyography pattern recognition by Mel-frequency spectrum","authors":"Yan Liu, Lan Tian, Yue Zheng, Xiaomeng Zhou, Xiangxin Li, Guanglin Li","doi":"10.1109/RCAR52367.2021.9517481","DOIUrl":"https://doi.org/10.1109/RCAR52367.2021.9517481","url":null,"abstract":"Currently, the electromyography pattern recognition (EMG-PR) is considered as a promising approach to control the human-machine interaction systems such as multifunctional prostheses. However, the robustness of EMG-PR method is still not strong enough to against some issues such as different arm positions, electrode shift, muscle fatigue and force variation in the clinical application. And among these issues, the force variation is an important problem that greatly affects the performance of EMG-PR based systems. In this study, a feature of log-Mel-frequency spectrum (log-MFS) was proposed to reduce the effects of force variations on the classification performance of the EMG-PR method. Eight channels of EMG signals were recorded from the upper limbs of eight subjects when performing different hand motions at low, medium and high force levels, respectively. Then the proposed feature of log-MFS was extracted from the EMG signals and used to classify the motions. Compared with the commonly used time domain feature set, the feature of log-MFS achieved the higher classification accuracies for all the three force levels. Especially for the un-trained high and low force levels, the average classification accuracies increased by about 27% and 11%. 
These results demonstrated that the feature of log-MFS is effectiveness to enhance the robustness of the EMG-PR based systems to against force variations in practical application.","PeriodicalId":232892,"journal":{"name":"2021 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115094591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evolving Gaussian Process based Learning of Knee Angle and Velocity","authors":"Jiantao Yang, Yong He, Chen He, Ping-Shan Shi","doi":"10.1109/RCAR52367.2021.9517702","DOIUrl":"https://doi.org/10.1109/RCAR52367.2021.9517702","url":null,"abstract":"Transparent human-exoskeleton interaction requires accurate human joint angle and velocity learning which are regarded as human intent detection to cope with the unspecific and irregular kinematics and dynamics of the system. This paper attempts to address the limitations and deficiencies encountered by traditional methods which make it challengeable to figure out the natural relationships among the strongly coupled multi-source information from each of the human-exoskeleton subsystems. Dependent Gaussian process (DGP) based data fusion algorithm is established and serves as the mathematics foundation to explore the deep layer correlation among the human joint angle, interactive force and processed sEMG achieving satisfactory prediction results of joint angle. Gradient estimation model is then performed to obtain the human joint velocity by differential of a GP model. The statistic nature of the proposed model offers superior flexibility and encouraging human motion prediction results. And the proposed model can achieve human joint angle and velocity learning simultaneously without accessional sensors which may incur marked cost or may be impossible. 
Experimental works on an in-house exoskeleton which is the key step to verify the superiority of the proposed algorithms are also presented.","PeriodicalId":232892,"journal":{"name":"2021 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128037724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new structure of end-effector traction upper limb rehabilitation robot","authors":"Liaoyuan Li, Jianhai Han, Xiangpan Li, Bingjing Guo, Peng Xia, Ganqin Du","doi":"10.1109/RCAR52367.2021.9517573","DOIUrl":"https://doi.org/10.1109/RCAR52367.2021.9517573","url":null,"abstract":"Among the rehabilitation therapies for patients with stroke or paralysis, robot-assisted rehabilitation is a research hotspot now. However, there are not many cases of clinical application, mainly because of the complex structure, single function, or high cost. This research proposes a new type of 3-DOF (degrees of freedom) upper limb rehabilitation robot in series structure of with an end-effector, which has 2 degrees of freedom in horizontal rotation and 1 degree of freedom in the vertical movement. It has a compact structure, saves space, is easy to be moved and can realize rehabilitation training in three-dimensional space. It is mainly used for rehabilitation training of the shoulder and elbow joints of the upper limbs. The first two rotary joints are driven by AC servo motors, and the prismatic joint is driven by a single-acting cylinder, which increases the passive compliance of the robot. In addition, a three-dimensional force sensor is added to expand the human-machine interaction channel and increase the active safety of the robot. 
Through the trajectory tracking and force control test of the robot, it is proved that the device can meet the demand of passive and active upper limb rehabilitation.","PeriodicalId":232892,"journal":{"name":"2021 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128142873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"2021 IEEE International Conference on Real-time Computing and Robotics [Front matter]","authors":"","doi":"10.1109/rcar52367.2021.9517331","DOIUrl":"https://doi.org/10.1109/rcar52367.2021.9517331","url":null,"abstract":"","PeriodicalId":232892,"journal":{"name":"2021 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124956324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SongBot: An Interactive Music Generation Robotic System for Non-musicians Learning from A Song","authors":"Kaiwen Xue, Zhixuan Liu, Jiaying Li, Xiaoqiang Ji, Huihuan Qian","doi":"10.1109/RCAR52367.2021.9517454","DOIUrl":"https://doi.org/10.1109/RCAR52367.2021.9517454","url":null,"abstract":"This paper proposes an interactive system for the non-musician learners to get inspired from a song. Differing from complex models of deep learning or simple Markov models sparse of music inter-features, in this research, we unify the composing of a song in a general architecture with music theory, and thus provide a much more understandable view of the music generation for non-musician learners. The proposed model focuses on extracting the extant feature from a target song and recreating different phrases with the representing probabilistic graph underlying the target song based on the relationship among notes in a phrase. Furthermore, an interactive interface between the users and the proposed system is built with a tunable parameter for them to be involved in the music generation and creating procedure. This procedure provides practical experience in aiding the non-musicians to understand and learn from composing a song. Approximately 700 samples of preferences questionnaire survey about the generated music and original music and more than 3000 samples for interactive preferences voting for the tunable parameter have been collected. 
Quantities of experiments have proved the validation of the proposed system.","PeriodicalId":232892,"journal":{"name":"2021 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131491171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robot-Assisted Haptic Rendering of Bilateral Physical Tasks via Physical Engine","authors":"Yudong Liu, Kaiya Chu, Qing Miao, Mingming Zhang","doi":"10.1109/RCAR52367.2021.9517611","DOIUrl":"https://doi.org/10.1109/RCAR52367.2021.9517611","url":null,"abstract":"Neurological injuries headed by hemiplegia are often the leading cause resulting in coordination decay. Bimanual robotic systems have demonstrated promising efficacy in recovering coordination of bilateral limbs. The main principle that bimanual robots work for physical therapy is through delivering motor tasks to resemble activities of daily life (ADLs). Current evidence also indicates that bimanual training outcomes can be improved by integrating the senses of haptics. However, majority of robotic systems have not been developed with the integration of haptic feedbacks, especially when required to meet the workspace of ADLs. This study sought to address this issue by developing a new haptic-integrated robotic system capable of delivering bilateral tasks. The system is implemented with robotic motion control and a physical engine integrated in Unity3d. Human users are expected to be able to perform bimanual trainings through interacting with dual robotic handles. Experiments were conducted to examine position tracing performance of motion control and haptic transparency of bimanual tasks rendered by the robot. The experimental results led us to believe that the developed robotic system can deliver bilateral physical tasks with haptics integration in an ADL-required workspace. 
Future work will examine its haptic performance in terms of real force feedback.","PeriodicalId":232892,"journal":{"name":"2021 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125489890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A robot assembly framework with “perception-action” mapping cognitive learning","authors":"Fengming Li, Tianyu Fu, G. Chu, R. Song, Yibin Li","doi":"10.1109/RCAR52367.2021.9517353","DOIUrl":"https://doi.org/10.1109/RCAR52367.2021.9517353","url":null,"abstract":"The assembly process is a motion constrained by geometry and environment. The whole assembly process can be described as a series of transitions between contact states. There are many uncertain factors in the actual robot assembly environment, such as parts, robot motion and sensor information. The method with contact state recognition is widely used for assembly. At present, most work is independent for state recognition and action execution. On the one hand, the method of analysis and statistics is used to improve the recognition rate of state without the execution of assembly action. On the other hand, a variety of optimization methods are used to improve the control strategy. In this paper, a cognitive learning framework of “perception-action” mapping learning is proposed, which integrates contact state recognition and assembly action. The cognitive learning model of knowledge description of perception action mapping is constructed. The robot perceives and recognizes the contact state online, and updates the “state-action” experience knowledge base in time. The validity of the algorithm is verified by the example of low-voltage electrical appliance plastic shell assembly. 
The results show that the cognitive learning method based on “perception-action” mapping can sense the contact state of assembly online, which could accumulate and update experience knowledge base in time.","PeriodicalId":232892,"journal":{"name":"2021 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"4 16","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120928006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Time-frequency decomposition-based weighted ensemble learning for motor imagery EEG classification","authors":"Liangsheng Zheng, Yue Ma, Mengyao Li, Yang Xiao, Wei Feng, Xinyu Wu","doi":"10.1109/RCAR52367.2021.9517593","DOIUrl":"https://doi.org/10.1109/RCAR52367.2021.9517593","url":null,"abstract":"Motor imagery brain-computer interface system based on Electroencephalogram (EEG) is an effective way to help the disabled recover part of their motor abilities. However, decoding the movement intention contained in the EEG signal accurately presents many challenges. In this paper, we propose a time-frequency decomposition-based weighted ensemble learning (TFDWEL) method, which aims to improve the classification performance of motor imagery EEG signals. The TFDWEL method divides the EEG signal into multiple subsets, and uses four time-frequency processing methods to extract the time-frequency sub-bands of each subset. Then the feature extraction model and classifier model of each subset trained by the common spatial pattern (CSP) algorithm and the support vector machine (SVM) algorithm are used to build a set of base learners. The least square error estimation method is used to learn the weight of each base learner, and finally the weighted summation method is used to obtain the final decision. The classification performance of the TFDWEL method is evaluated on the BCI Competition IV Data Set 2b, and the results show that the classification accuracy of 81.58% can be obtained. 
Superior classification performance indicates that the TFDWEL method can be used in further research to help the rehabilitation of the disabled.","PeriodicalId":232892,"journal":{"name":"2021 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114996267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Measuring ZMP of Self-Balancing Exoskeleton Robot is Calibrated by Using The Neural Network","authors":"Yang Xu, Yang Xiao, Yue Ma, Liangsheng Zheng, Yongzhi He","doi":"10.1109/RCAR52367.2021.9517509","DOIUrl":"https://doi.org/10.1109/RCAR52367.2021.9517509","url":null,"abstract":"The exoskeleton robot is an auxiliary device to help the disabled people walk, and the self-balancing exoskeleton robot is one which is to keep balance without the assistance of external crutches. In order to keep the balance of the self-balancing exoskeleton robot, it is necessary to get the position of the Zero Moment Point by measuring the pressure of the footplate, and make the position of ZMP in range of supporting area. In this experiment, the footplate is used with the double-deck structure, this structure is compared with the single-deck structure, the double-dack structure will not lose the information of the collected ZMP without direct touch with the sensor, and it is lighter than another structure with dozens of sensors. But there is an inevitable structural coupling in the double-deck structure, which makes the ZMP have a large measurement error. In order to solve this problem, a novel idea is proposed, with the help of the powerful processing and learning capabilities of the neural network, four kinds of neural networks are used to calibrate measured position of ZMP so that reducing error of the measured ZMP. By comparing position of the actual ZMP before and after the calibration with the ideal position of ZMP and computing the errors to judge the effect of the calibration. Through experimental comparison, it is concluded that the different neural networks eliminate error of the measured ZMP in different extent. 
When the GRNN neural network is used to calibrate position of ZMP, the effect is the most ideal.","PeriodicalId":232892,"journal":{"name":"2021 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129662453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}