{"title":"Voice Control of a Robotic Arm for Hysterectomy and Its Optimal Pivot Selection","authors":"Mengjun Fang, Peng Li, Le Wei, Xuebin Hou, Xinguang Duan","doi":"10.1109/RCAR47638.2019.9043990","DOIUrl":"https://doi.org/10.1109/RCAR47638.2019.9043990","url":null,"abstract":"This paper presents a method to recognize the voice command which is using for control a rbototic arm for hysterectomy. We extract MFCCs (Mel Frequency Cepstrum Coefficients) characteristic parameters as the original input, then put it into the CNNs (Convolutional Neural Networks) model after specific processing. After obtain the speech recognition model, we input the voice of command generate by a operator and then it would predicted a voice command and take corresponding action on robot. The plantform we used to verify our model is a 6-DOF manipulator. In order to promote maneuverability of this robot, we adopt a method to optimize the selection of Remote Center of Motion (RCM). Experiments show that this speech recognition meodel based on CNNs is fulfill the requirment of surgery and controling robot by its command is feasible.","PeriodicalId":314270,"journal":{"name":"2019 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121241185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"UAV Path Planning Based on Biological Excitation Neural Network and Visual Odometer","authors":"Ye-jian Li, Yong Liu","doi":"10.1109/RCAR47638.2019.9043987","DOIUrl":"https://doi.org/10.1109/RCAR47638.2019.9043987","url":null,"abstract":"Unmanned aerial vehicle(UAV) have been widely used in military and civil fields due to their compact structure, flexible mobility, low cost and other advantages. With the development of artificial intelligence in recent years, more intelligent and advanced algorithms have appeared, in which machine vision, as an important branch in the field of artificial intelligence, has also been greatly developed. The limitation of space, load, endurance and computing capacity hinders the application of intelligent algorithms on UAV. In the paper a semi-autonomous control platform of the quadrotor UAV was developed and the upper and lower dual control core architecture is implemented. Based on the hardware platform, the improved visual inertia odometer (VIO) and the biological excitation neural network are used to improve the flight performance and the ability of autonomy. To solve the problem of the synchronization for VIO, a cubic spline interpolation function was employed. A biological excitation neural network was extended to solve UAV on-line path planning. It provides an on-board path planning approach for UAV in the 3D world considering the dynamic obstacles. Finally, the feasibility and stability of the designed system were verified by flight experiments.","PeriodicalId":314270,"journal":{"name":"2019 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129745595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A non-tachometer method for Order Tracking technique in NVH analysis based on Deep Learning and rpm estimation","authors":"Yue Zhang, Tianqi Shao, Liucun Zhu, Zhen Zhang, Wenbin Xie","doi":"10.1109/RCAR47638.2019.9044105","DOIUrl":"https://doi.org/10.1109/RCAR47638.2019.9044105","url":null,"abstract":"This paper presents a new analysis method of automobile noise-vibration-harshness(NVH) analysis based on a discrete recurrent neural network(RNN) and generative adversarial network(GAN), which, can not only replace Short Fast Fourier Transform(SFFT) but also the entire tachometer data assembly system for our network's ability to obtain rpm from vibration signal. This method inherits the leading spirit of digital resampling and Time-Variant Discrete Fourier Transform(TVDFT), adjusting sampling rate concerning rpm changes and interpolation to obtain an equal time interval sequence out of identical angle interval sequence, as the setting parameter of these methods determines the quality of order tracking. The neural-network-based approach involves three steps: 1. Simulation and sampling of the vibration signal of a DeLaval rotor. 2. Determination of rpm, and instantaneous sampling rate, window size as well as resampling time and values through a discrete RNN-GAN learning system with the input vibration signal and output parameters. 3. Illustration of a dB-rpm graph obtained by D-RNN-GAN and further evaluation of system performance 4. The application to big data and its review.","PeriodicalId":314270,"journal":{"name":"2019 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129030916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on omnidirectional mobile robot motion control based on integration of traction and steering wheel","authors":"C. Ai, G. Ren, Xuan Sun, Honghua Zhao, Liyong Tan, Quancheng Dong","doi":"10.1109/RCAR47638.2019.9044114","DOIUrl":"https://doi.org/10.1109/RCAR47638.2019.9044114","url":null,"abstract":"In order to solve the automatic transportation of heavy materials under the limited working space of production workshops and warehouses, two sets of heavy-duty omnidirectional mobile robot motion control systems with steering wheel drive units were designed. The steering wheel combination drive unit of the “walking + steering” set is used to build the mobile robot chassis, and the mechatronics servo system and mathematical model of multi-motor coordinated motion are constructed. The communication between the controller and the steering wheel combination drive unit is established through the CAN bus. The specific implementation is to capture and analyze the control signal through the controller to obtain the desired motion mode, to obtain the motion of each set of steering wheel unit through the mathematical model, and to realize the desired motion through the synthesis of each set of steering wheel unit motion. It has been verified by experiments that the two sets of steering wheel unit-driven mobile robot control system realizes the zero turning radius, 360-degree omnidirectional movement of the robot and rotation during the movement. It can be used for flexible work in tight spaces.","PeriodicalId":314270,"journal":{"name":"2019 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124705132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Practical Framework for Automatic 3D Reconstruction of Clothing with RGB-D Cameras","authors":"Hongguang Chang, Zhan Song, Juan Zhao","doi":"10.1109/RCAR47638.2019.9043988","DOIUrl":"https://doi.org/10.1109/RCAR47638.2019.9043988","url":null,"abstract":"Accurate 3D modelling of real clothing is of great significance for many applications. However, most existing methods of acquiring these 3D models are too expensive and non-automatic. In this paper, we propose a practical framework for the automatic 3D reconstruction of clothing by scanning real clothes using a motorized rotation stage and three RGB-D cameras (Intel RealSense Depth Camera D415). The framework consists of three parts, including data acquisition module, point cloud process module and model reconstruction module. Users can easily use it to reconstruct clothes because it works automatically and quickly. Experimental results show that the proposed system and method can obtain 3D models of clothing with satisfied 3D mesh and texture. This research provides manufacturers a fast and economical way to create 3D clothing models for applications like virtual reality display and online shopping etc.","PeriodicalId":314270,"journal":{"name":"2019 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124758103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and implementation of a High-speed Lidar Data Reading System based on FPGA","authors":"Tian Sun, Yong Liu, Yujie Wang","doi":"10.1109/RCAR47638.2019.9044143","DOIUrl":"https://doi.org/10.1109/RCAR47638.2019.9044143","url":null,"abstract":"Currently, research on the use of lidar data in various applications is very popular. These studies are almost all based on non-real-time operating systems with random delays, which leads to lag in the received data and inaccuracies in decision making. This paper employs FPGA to propose a method for reading lidar data in high-speed on an FPGA using VHDL. Moreover, the proposed method uses a TCP/IP module to enable the FPGA to communicate with the lidar, avoiding the overly complex TCP protocol design inside the FPGA. The overall design, individual blocks inside the FPGA, and the connections among and role of each port are described. Experimental results demonstrate that the lidar data are correctly read by the FPGA board. In addition, the time required for FPGA to read a lidar message for a circle scan was calculated to be about $1.033 mumathrm{s}$. The proposed approach provides a very useful basic platform for many applications that use lidar as a sensor and will improve their detection accuracy.","PeriodicalId":314270,"journal":{"name":"2019 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124961156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual SLAM Based on Geometric Cluster Matching","authors":"Chaoran Tian, Y. Ou, Yangyang Qu","doi":"10.1109/RCAR47638.2019.9044135","DOIUrl":"https://doi.org/10.1109/RCAR47638.2019.9044135","url":null,"abstract":"The approach of feature-based visual SLAM is very popular in Micro Aerial Vehicle (MAV) navigation and wheeled mobile robot (WMR) navigation. Most modern feature-based visual SLAM frameworks extract a fixed number of feature points and then match the feature points by brute force or adopt Fast Library for Approximate Nearest Neighbors (FLANN). The features' extracting and matching processes will determine the localization accuracy and mapping quality significantly. In this paper, we propose a feature matching algorithm for the robot equipped with a binocular camera, which provides accurate real-time localization performance and robustness. In our feature matching method, both the geometric relationship of image feature points and the similarity of descriptor are considered. This paper presents a visual SLAM framework based on geometric cluster matching and the effectiveness of the proposed method is verified by practical experiments.","PeriodicalId":314270,"journal":{"name":"2019 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129637504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Fully Automatic Framework to Localize Esophageal Tumor for Radiation Therapy","authors":"Haipei Ren, Teng Li, Yuwei Pang","doi":"10.1109/RCAR47638.2019.9043973","DOIUrl":"https://doi.org/10.1109/RCAR47638.2019.9043973","url":null,"abstract":"Automatic localization of esophageal tumors is an important part of target volume planning in radiotherapy. Currently, the main localization method is manual localization. Traditional manual positioning is time-consuming and inaccurate for the following reasons. First of all, esophageal neoplasms are irregular in shape. The second, the tumor image was insufficiently contrasted with the surrounding tissue. Also, the tumor area is highly heterogeneous. To solve these problems, this paper proposes an automatic positioning framework combining single point multi-box detector (SSD) with the optimized VGG16 deep learning network. The optimized algorithm network has achieved good results in our esophageal tumor localization experiment. The experimental data consists of 96 esophageal VMAT plans and training set consists of 60 patients, the remaining 36 patient data sets were used as the test set. We trained with 5000 slices and tested with 1000. The experiment result showed the tumor areas of 820 CT slices were effectively located, and the accuracy rate of intersection greater than and (IoU)[6] value was 82%. These promising results suggest that the target area of esophageal tumor can be well located in our optimized framework, which can improve the efficiency and quality of plan making of esophageal tumor radiotherapy.","PeriodicalId":314270,"journal":{"name":"2019 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129121811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design of a Cable Driven Floating Robotic Arm with Continuum Joints","authors":"Zhonghao Wu, Marco Cederle, E. Giani, Kai Xu","doi":"10.1109/RCAR47638.2019.9044130","DOIUrl":"https://doi.org/10.1109/RCAR47638.2019.9044130","url":null,"abstract":"Long-reach manipulator shows potentials in inspection, search and rescue. However, the reach of such a manipulator is often limited, due to fact that the distal structures become payloads of the proximal joints. This research hence focuses on a proof-of-concept study of a slim long-reach robotic arm designed with continuum joints and floating links. A float link and a continuum joint constitute a module that is weightless due to buoyancy. The reach hence becomes unlimited in theory. The actuation of each joint is decoupled via a transmission arrangement, providing a simple kinematic model no matter how many robotic modules are used. Each floating link is composed of a from-the-shelf helium-filled Mylar balloon that is caged by acrylic rings. Each of the two-degree-of-freedom continuum joints is made from a super-elastic nitinol (nickel-titanium alloy) rod and actuated by three cables pulled by three stepper motors. Preliminary experimental results on this constructed 3-meter prototype show that the floating robotic arm can move with acceptable accuracy in still air, validating the proposed concept.","PeriodicalId":314270,"journal":{"name":"2019 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121196550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mobility Characteristics Analysis of a Dual-Continuum-Joint Translator","authors":"Lingyun Zeng, Xu Liu, Yang Zheng, Kai Xu","doi":"10.1109/rcar47638.2019.9044157","DOIUrl":"https://doi.org/10.1109/rcar47638.2019.9044157","url":null,"abstract":"A design library of continuum mechanisms, which consists of various mechanism modules with different motion characteristics, can greatly facilitate design tasks. A recent addition to this design library is a dual-continuum-joint translator. This translator is composed of two continuum joints and a multi-lumen tube. The two continuum joints are identical in size and coupled by connecting their corresponding backbones inside the multi-lumen tube. This translator was previously used in constructing a continuum delta robot with a parallel structure and three translation DoFs (Degrees of Freedom). Even though the validity of the dual-continuum-joint translator was experimentally demonstrated, a theoretical characterization is still missing. This paper hence presents a mobility characteristics analysis of the dual-continuum-joint translator. Cosserat rod theory and screw theory are used to analyze the mobility characteristics of this translator. First, the deflected shapes and the compliance matrices of the translator are calculated using Cosserat rod theory. Then, the mobility characteristics of the translator is investigated using screw theory by quantifying the primary mobility. Results of the calculated deflected shapes show that under external wrenches exerted on the translator, the two coupled continuum joints have the bending shapes close to circular curves with similar bending angles. The calculated compliance matrices indicate that translation is the translator's primary mobility.","PeriodicalId":314270,"journal":{"name":"2019 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114705709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}