Title: A Deep-Learning-Based Method for Gaining More Biomechanical Parameters with Fewer Sensors in Fast and Complex Movements
Authors: Ye Wang, Gongbing Shan, Hua Li, Ruliang Feng, Yilan Zhang, Guanglin Li, Lin Wang
Published in: 2022 IEEE International Conference on Real-time Computing and Robotics (RCAR), 17 July 2022
DOI: https://doi.org/10.1109/RCAR54675.2022.9872240

Abstract: Real-time biomechanical feedback provides direct, objective, quantified information that can help practitioners such as athletes and coaches accelerate motor-skill learning and training. However, in elite sports involving fast movements and complex motor skills, it is usually difficult to monitor human motion in full and acquire the key biomechanical parameters (e.g., kinematic and kinetic data, EMG) with only a few sensors, while using too many sensors in field tests may limit the athletes' motor ability and compromise the validity and reliability of the collected data. In this paper, we employ a deep learning method to substantially reduce the number of sensors required for real-time biomechanical feedback in the field, based on the local motion features of the hammer throw identified in our pilot study. Using the Keras API from the open-source TensorFlow platform, two Sequential neural network models are implemented and compared. One model has two inputs (vertical displacements and velocities at the waist) and six outputs (vital joint angles of the lower limbs); the other has four inputs (vertical displacements and velocities at the wrist and waist) and thirteen outputs (vital joint angles of both the upper and lower limbs). The experimental results demonstrate that the vital joint angles of the upper and lower limbs correlate strongly with the vertical wrist and waist/hip displacements, respectively. This study indicates that fewer wearable sensors can be used in fast and complex movements to capture the most significant kinematic data, while additional biomechanical parameters can be obtained by prediction.
{"title":"A New Robotic Grasp Detection Method based on RGB-D Deep Fusion*","authors":"Hao Ma, Ding Yuan, Qingke Wang, Hong Zhang","doi":"10.1109/RCAR54675.2022.9872259","DOIUrl":"https://doi.org/10.1109/RCAR54675.2022.9872259","url":null,"abstract":"Grasping is one of the most widely used tasks of robots. The application of computer vision can improve robot intelligence. Previous methods simply treated the problem of robotic grasping detection similar to object detection, which ignores the characteristics of the grasping problem, leading to a loss of accuracy. Additionally, treating depth images equally with RGBs is unreasonable. This study proposes a new grasp detection model using an RGB-D deep fusion module that combines multi-scale RGB and depth features. An adaptive anchor box-setting method based on a two-step approximation was designed. With the network-sharing structures of target and grasp detection, the target category and appropriate grasp posture can be obtained end-to-end. Experiments show that compared with other models, ours achieves significant improvement in accuracy while maintaining real-time computing performance.","PeriodicalId":304963,"journal":{"name":"2022 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132268530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CP+: Camera Poses Augmentation with Large-scale LiDAR Maps","authors":"Jiadi Cui, S. Schwertfeger","doi":"10.1109/RCAR54675.2022.9872176","DOIUrl":"https://doi.org/10.1109/RCAR54675.2022.9872176","url":null,"abstract":"Large-scale colored point clouds have many advantages in navigation or scene display. Relying on cameras and LiDARs, which are now widely used in reconstruction tasks, it is possible to obtain such colored point clouds. However, the information from these two kinds of sensors is not well fused in many existing frameworks, resulting in poor colorization results, thus resulting in inaccurate camera poses and damaged point colorization results. We propose a novel framework called Camera Pose Augmentation (CP+) to improve the camera poses and align them directly with the LiDAR-based point cloud. Initial coarse camera poses are given by LiDAR-Inertial or LiDAR-Inertial-Visual Odometry with approximate extrinsic parameters and time synchronization. The key steps to improve the alignment of the images consist of selecting a point cloud corresponding to a region of interest in each camera view, extracting reliable edge features from this point cloud, and deriving 2D-3D line correspondences which are used towards iterative minimization of the re-projection error.","PeriodicalId":304963,"journal":{"name":"2022 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134531189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A real-time neuro-robot system for robot state control
Authors: Zhe Chen, Tao Sun, Zihou Wei, Xie Chen, S. Shimoda, Toshio Fukuda, Qiang Huang, Qing Shi
Published in: 2022 IEEE International Conference on Real-time Computing and Robotics (RCAR), 17 July 2022
DOI: https://doi.org/10.1109/RCAR54675.2022.9872184

Abstract: Embodying an in vitro biological neural network (BNN) with a robot body to achieve in vitro biological intelligence has been attracting increasing attention in neuroscience and robotics. As a step toward this aim, we propose a real-time neuro-robot system based on calcium recording, consisting of a modular BNN and a simulated mobile robot. In this system, the neural signal of the BNN is recorded, analyzed, and decoded to control the motion state of the mobile robot in real time, while the robot's sensor data are encoded and transmitted to control an electrical pump, which is included to assess the real-time performance of the system. An obstacle avoidance task is chosen as the proof-of-concept experiment: a calcium-recording video of a BNN is replayed to emulate a real-time video stream, and the video is monitored and analyzed by a custom-made graphical user interface (GUI) to control the robot motion state and the electrical pump. Experimental results demonstrate that the proposed neuro-robot system can control the robot motion state in real time. In the future, we will connect the electrical pump to the BNN and transmit the signal from the robot to the BNN by applying local drug stimulation, thereby realizing a closed-loop neuro-robot system.
{"title":"A New EEG-based Paradigm for Classifying Intention of Compound-Limbs Movement","authors":"Rui Ma, Yichuan Jiang, Yifeng Chen, Mingming Zhang","doi":"10.1109/RCAR54675.2022.9872213","DOIUrl":"https://doi.org/10.1109/RCAR54675.2022.9872213","url":null,"abstract":"Traditional lower limb exoskeleton robots utilize electromechanical control panels or buttons to assist patients with physical disabilities, which is a passive training way of rehabilitation. Over the past few years, extensive research has been conducted on brain-controlled lower limb exoskeleton robot technology combined with an electroencephalogram (EEG) signals. However, the way most paradigms are designed does not conform to the natural walking posture of human beings. In this study, a new EEG-based paradigm is proposed for detecting the intention of compound-limbs movement, which is closer to human walking posture. The time-frequency analysis presents that there showed stronger event-related desynchronization (ERD) at the main channels. Besides, the brain topographical distribution shows that the ERD not only exists in the contralateral sensorimotor area, but also appears on the central parietal lobe region (the leg motion mapping region), which initially verified the possibility of differentiating this pattern. Then, after extracting time-frequency-spatial features by common spatial pattern method, three supervised machine learning algorithms are used to classify the compound limb movement. The results demonstrate that the classification performance of compound-limbs movement mode are much higher than that of single-leg movement (>20%). This research introduces a new paradigm for classifying lower-limbs related movement intention, which might help control the lower limbs exoskeleton with subjects’ voluntary intention and improve the effect of human-machine interface system.","PeriodicalId":304963,"journal":{"name":"2022 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124876903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improved Adaptive Controller Design Scheme to Attenuate the Noise Amplification Effect of Differentials*","authors":"Zhi Fa, Yongchun Fang, Yinan Wu, Chun-Shan Liu","doi":"10.1109/RCAR54675.2022.9872239","DOIUrl":"https://doi.org/10.1109/RCAR54675.2022.9872239","url":null,"abstract":"Adaptive control is widely used in nonlinear systems, and is under active research. Although the conventional adaptive control guarantees the asymptotic stability, it may yield limited performance in real experiments due to many factors, such as noise and hardware limitations. In this paper, a proportional-integral adaptive controller with a proportional-integral-derivative update law is proposed to attenuate the effect of measurement noise and improve the performance of the closed-loop system. Two more improvements, dividing differential signals into two smoother signals and facilitating additional estimations to weight different components of the same signal, are used in addition to the proportional-integral-derivative-type update law. The stability is proved theoretically and the performance is verified by simulation tests.","PeriodicalId":304963,"journal":{"name":"2022 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117039458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: 3-D Path Following Control for a Miniature Maneuverable Robotic Fish with Hybrid Actuators
Authors: Sijie Li, Yating Li, Zhengxing Wu, Jian Wang, Min Tan
Published in: 2022 IEEE International Conference on Real-time Computing and Robotics (RCAR), 17 July 2022
DOI: https://doi.org/10.1109/RCAR54675.2022.9872178

Abstract: Recent developments in biomimetic technology show great potential for improving the locomotion capability of underwater robots. This paper presents a three-dimensional (3-D) path-following control method for a miniature robotic tiger fish with hybrid propulsion, i.e., fishlike swimming combined with marine propellers. Taking both the bionic propulsion and the propellers fully into account, a hybrid 3-D dynamic model with reasonable assumptions is first established through parameter identification. On this basis, a 3-D path-following controller that exploits both fishlike swimming and propeller propulsion is proposed, and its stability is proven using a Lyapunov function. Finally, extensive aquatic experiments validate the effectiveness of the proposed prototype and methods, offering valuable guidance for the development of high-performance, hybrid-driven miniature robotic fish.
{"title":"An Efficient Learning Based Autonomous Exploration Algorithm For Mobile Robots*","authors":"Zhiwei Xing, Jintao Wang, Xiaorui Zhu","doi":"10.1109/RCAR54675.2022.9872229","DOIUrl":"https://doi.org/10.1109/RCAR54675.2022.9872229","url":null,"abstract":"In this paper, a novel autonomous exploration algorithm is proposed to achieve efficient exploration task of an unknown environment in terms of the shortest path. First, a new neural network based on the variational autoencoder, LMPnet, is proposed to predict a series of local maps with projected obstacles of unknown areas. Then, a deep Q-network with long-short term memory (LSTM) structure, ETPNet, is proposed to generate piecewise local target points based on the predicted local maps where the reward function is designed to favor shorter length of the local path and larger information gain. Experimental results demonstrate that the proposed algorithm achieves good performance in reducing exploration time.","PeriodicalId":304963,"journal":{"name":"2022 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122968322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Medicine bottle recognition based on machine vision and deep learning in intravenous medicine dispensing robot*","authors":"Haiyang Jin, Zhen Teng, Yucheng He, Qi Chen, Ruiqiang Wang, Ying Hu","doi":"10.1109/RCAR54675.2022.9872280","DOIUrl":"https://doi.org/10.1109/RCAR54675.2022.9872280","url":null,"abstract":"The intravenous drug dispensing robot needs to handle as many as hundreds of drugs and hundreds of different types of drugs. The identification and measurement of different types of medicine bottles is the key to precise dispensing. In this paper, aiming at different tasks of intravenous drug dispensing robot, such as the classification of drug bottle types and the measurement of key dimensions of drug bottles, a machine vision method and deep learning framework YoloV5s are adopted to realize an efficient and stable drug bottle identification and measurement method. Combined with the measurement accuracy of the machine vision method and the insensitivity of the deep learning method to the background and light source, the accurate identification and measurement of the medicine bottle in the different dispensing process are realized. Finally, the effects of background and ambient light sources on the recognition results are quantitatively tested through experiments.","PeriodicalId":304963,"journal":{"name":"2022 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"768 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123901219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Research on Current Automated Driving ODD Regulations, Standards and Applications","authors":"Chen Chen, Qidong Zhao, Zheng Tong, Zhai Yang, Xianglei Zhu","doi":"10.1109/RCAR54675.2022.9872246","DOIUrl":"https://doi.org/10.1109/RCAR54675.2022.9872246","url":null,"abstract":"ODD, short for Operational Design Domain, is fundamental to automated driving technology R&D. A reasonable and well-defined ODD is the prerequisite for the realization of automated driving function safety. In this paper, UN regulations and international standards (including drafts) publicly released in recent years, and presentation from important organizations or companies in the industry are thoroughly reviewed to perform a prospective analysis from four aspects, namely, concept and definition of ODD, responsible subject for ODD description, ODD description methods and contents, and ODD application cases.","PeriodicalId":304963,"journal":{"name":"2022 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"105 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117110332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}