{"title":"Additively Manufactured Primitive Plastic Phantom for Calibration of Low-Resolution Computed Tomography Cone Beam Scanner for Additive Creation of 3D Copies using Inverse Radon Transform","authors":"Valentin Ameres, Meriem Chetmi, Lucas Artmann, Tim C. Lueth","doi":"10.1109/ROBIO55434.2022.10011777","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10011777","url":null,"abstract":"Computed Tomography (CT) and 3D reconstruction contribute significantly to reverse engineering as well as to additive manufacturing. Utilizing CT scans, surface information as well as inner details of objects of interest can be recorded non-destructively. In this work, a low-resolution cone beam computed tomography (CBCT) scanner was used to scan, reconstruct, and print plastic components in order to create 3D copies. Software-based calibration using an additively manufactured two-layer plastic phantom containing steel ball bearings was used to detect and correct geometrical alignment errors and improve reconstruction quality. The phantom was designed to be printed additively and assembled without further tools, with an axial connection to the CBCT scanner. Corrections were applied to the two-dimensional 300x300 pixel X-ray projections before reconstruction. A reconstructed volume of 212x212x212 voxels was achieved using either the inverse-Radon-transform-based Feldkamp-Davis-Kress (FDK) algorithm or the Simultaneous Algebraic Reconstruction Technique (SART). In an experiment, a plastic phantom was fabricated and used for misalignment correction. Reconstructions from uncorrected and corrected projections of a 30 mm plastic cube with a center bore were then compared in terms of density. The cube reconstructed from corrected projections had higher voxel density values and sharper slices, demonstrating the successful fabrication and use of the plastic phantom.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130600310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
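The FDK reconstruction named in the abstract above generalizes 2D filtered backprojection (the inverse Radon transform) to cone-beam geometry. As a rough illustration, not the authors' implementation, a minimal 2D parallel-beam filtered backprojection can be written with NumPy and SciPy alone:

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(image, angles_deg):
    """Parallel-beam Radon transform: rotate the image, then sum along columns."""
    return np.stack([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def filtered_backprojection(sinogram, angles_deg):
    """Ramp-filter each projection, then smear it back across the image."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))  # ramp filter in the frequency domain
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    recon = np.zeros((n, n))
    for proj, a in zip(filtered, angles_deg):
        # constant smear of the filtered projection, rotated back into place
        recon += rotate(np.tile(proj, (n, 1)), -a, reshape=False, order=1)
    return recon * np.pi / (2 * len(angles_deg))

angles = np.linspace(0.0, 180.0, 90, endpoint=False)
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0  # a simple square, standing in for a cube slice
recon = filtered_backprojection(forward_project(phantom, angles), angles)
```

With more (and better-aligned) projection angles the reconstructed square sharpens, mirroring the paper's observation that corrected projections yield sharper slices.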
{"title":"Towards Precise Model-free Robotic Grasping with Sim-to-Real Transfer Learning","authors":"Lei Zhang, Kaixin Bai, Zhaopeng Chen, Yunlei Shi, Jianwei Zhang","doi":"10.1109/ROBIO55434.2022.10011794","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10011794","url":null,"abstract":"Precise robotic grasping of novel objects is a major challenge in manufacturing, automation, and logistics. Most current methods for model-free grasping are hampered by sparse data in grasping datasets and by errors in sensor data and contact models. This study combines data generation and sim-to-real transfer learning in a grasping framework that reduces the sim-to-real gap and enables precise and reliable model-free grasping. To address the data sparsity problem, a large-scale robotic grasping dataset with dense grasp labels is generated using domain randomization and a novel data augmentation method for deep learning-based robotic grasping. We present an end-to-end robotic grasping network with a grasp optimizer. The grasp policies are trained with sim-to-real transfer learning. The results suggest that our grasping framework reduces the uncertainties in grasping datasets, sensor data, and contact models. In physical robotic experiments, our framework grasped single known objects and novel complex-shaped household objects with a success rate of 90.91%. In a complex multi-object grasping scenario, the success rate was 85.71%. The proposed grasping framework outperformed two state-of-the-art methods on both known and unknown objects.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121652778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
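Domain randomization, as used above to densify the grasping dataset, amounts to perturbing simulated sensor data so a learned policy cannot overfit to simulator quirks. The specific perturbations below (gain jitter, Gaussian noise, dropout holes) are illustrative assumptions, not the paper's exact augmentation method:

```python
import numpy as np

def randomize_depth(depth, rng):
    """Apply simple domain-randomization perturbations to a depth image."""
    out = depth * rng.uniform(0.95, 1.05)             # random sensor gain
    out = out + rng.normal(0.0, 0.003, depth.shape)   # additive Gaussian noise
    holes = rng.random(depth.shape) < 0.02            # simulate missing depth returns
    out[holes] = 0.0
    return out

rng = np.random.default_rng(42)
depth = np.full((48, 64), 0.5)   # flat synthetic scene at 0.5 m
augmented = [randomize_depth(depth, rng) for _ in range(8)]
```

Each call yields a distinct corrupted view of the same scene, so one simulated grasp label can supervise many augmented inputs.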
{"title":"Recognition of Degradation Scenarios for LiDAR SLAM Applications","authors":"Chenglin Yang, Zihao Chai, Xiaoxiao Yang, Hanyang Zhuang, Ming Yang","doi":"10.1109/ROBIO55434.2022.10011727","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10011727","url":null,"abstract":"A SLAM system that uses 3D LiDAR as its only sensor is prone to degradation in scenarios with sparse structure and few constraints. It cannot solve the robot pose from the limited LiDAR constraint information, which leads to localization and mapping failures. Due to the limitations of LiDAR, it is difficult to solve localization and mapping in degraded scenarios relying on LiDAR point cloud data alone. The mainstream approach is to provide additional information through multi-sensor fusion and other schemes to constrain and correct the system's pose. In a multi-source fusion system, it is still essential to determine the reliability of each sensor source in different directions. Hence, the recognition of degradation scenarios has significant research value. In this paper, three schemes (geometric information, constraint disturbance, and residual disturbance) are designed to quantitatively identify the degradation state of the system and estimate the degradation direction. Experimental verification shows that the proposed schemes have a favorable recognition effect in degradation scenarios in both simulated and real environments.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121142959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
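A common way to quantify this kind of LiDAR degradation, in the spirit of the geometric-information scheme above (though not necessarily the authors' exact formulation), is to eigen-decompose the information matrix built from scan surface normals: directions with near-zero eigenvalues are unconstrained by the scan.

```python
import numpy as np

def degenerate_directions(normals, eps=1e-3):
    """Eigen-decompose A = sum n n^T over surface normals; eigenvectors with
    small (normalized) eigenvalues are translation directions the scan does
    not constrain."""
    A = normals.T @ normals
    w, v = np.linalg.eigh(A)   # eigenvalues in ascending order
    w = w / w.max()            # normalize so the threshold is scale-free
    return w, v[:, w < eps]

# Corridor-like scene: normals only along x (walls) and z (floor),
# so translation along y is unconstrained.
normals = np.array([[1, 0, 0]] * 50 + [[0, 0, 1]] * 50, dtype=float)
w, bad = degenerate_directions(normals)
```

A fusion front end could then down-weight LiDAR odometry only along the columns of `bad`, keeping the well-constrained directions.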
{"title":"An Experimental Study of Keypoint Descriptor Fusion","authors":"Yaling Pan, Li He, Y. Guan, Hong Zhang","doi":"10.1109/ROBIO55434.2022.10011825","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10011825","url":null,"abstract":"Local feature descriptors play a crucial role in computer vision problems, especially robot motion. Existing descriptors are highly accurate, but their performance depends on the influence of distracting factors, such as illumination and viewpoint. There is room for further improvement of these descriptors. In this paper, we provide an in-depth analysis of several exciting features of the descriptor fusion model (DFM) we have proposed in our recent work, which uses an autoencoder to combine descriptors and exploit their respective advantages. With this DFM framework, we further validate that fused descriptors can retain advantageous properties and that our DFM is a generally applicable method with respect to various component descriptors. Specifically, we evaluate multiple combinations of hand-crafted and CNN descriptors concerning their performance on a benchmark dataset with illumination and viewpoint changes to obtain comprehensive experimental results. The results show that the fused descriptors have better matching accuracy than their component descriptors.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121306997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
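The fusion idea can be caricatured with a linear stand-in for the autoencoder: concatenate two descriptor sets and keep the top principal components as the fused descriptor. This is only a sketch of the combine-and-compress step; the DFM itself learns a nonlinear encoder.

```python
import numpy as np

def fuse(desc_a, desc_b, dim):
    """Linear 'autoencoder' fusion: concatenate per-keypoint descriptors and
    project onto the top principal components (a crude stand-in for the
    learned bottleneck of an autoencoder)."""
    X = np.hstack([desc_a, desc_b])      # one row per keypoint
    X = X - X.mean(axis=0)               # center before SVD
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:dim].T                # fused descriptor of length dim

rng = np.random.default_rng(0)
hand_crafted = rng.normal(size=(100, 64))    # e.g. a 64-d hand-crafted descriptor
cnn_based = rng.normal(size=(100, 128))      # e.g. a 128-d CNN descriptor
fused = fuse(hand_crafted, cnn_based, 32)
```

The fused vector is shorter than either input yet draws on both, which is the property the paper evaluates across descriptor combinations.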
{"title":"Target prediction and temporal localization of grasping action for vision-assisted prosthetic hand","authors":"Xu Shi, Wei Xu, Weichao Guo, X. Sheng","doi":"10.1109/ROBIO55434.2022.10011751","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10011751","url":null,"abstract":"With the development of shared control technology for humanoid prosthetic hands, more and more research focuses on vision-based machine decision making. In this paper, we propose a miniaturized eye-in-hand target object prediction and action decision-making framework for the humanoid hand “approach-grasp” sequence. Our prediction system simultaneously predicts the target object and detects the temporal localization of the grasp action. The system is divided into three main modules: feature logging, target filtering, and grasp triggering. The optimal configuration of the hyper-parameters designed in each module is determined experimentally. We also propose a prediction quality assessment method for “approach-grasp” behavior at the instance, sequence, and action decision levels. With the optimal hyper-parameter configuration, the prediction system achieves an average instance prediction accuracy (IP) of 0.854 and a grasp action prediction accuracy (GP) of 0.643. It also shows good prediction stability for most object classes, with the number of prediction changes (NPC) below 6.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122313511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
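Temporal localization of the grasp action ("grasp triggering") can be illustrated by a debounce rule: fire once the predicted target has stayed stable for a fixed number of frames. The rule and its parameters here are hypothetical, not the paper's actual module:

```python
def grasp_trigger(stream, n_stable=5):
    """Return (frame_index, label) at the first moment the per-frame target
    prediction has been unchanged for n_stable consecutive frames, else None."""
    count, last = 0, None
    for t, label in enumerate(stream):
        count = count + 1 if label == last else 1  # reset on every change
        last = label
        if count >= n_stable:
            return t, label
    return None

# Predictions flicker while approaching, then settle on the true target.
predictions = ["cup", "cup", "ball", "ball", "ball", "ball", "ball"]
decision = grasp_trigger(predictions, n_stable=4)
```

A low NPC (few prediction changes), as reported above, is exactly what makes such a stability-based trigger fire early and reliably.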
{"title":"A Predictive Method for Site Selection in Aquaculture with a Robotic Platform","authors":"Tong Shen, Tianqi Zhang, Kai Yuan, Kaiwen Xue, Huihuan Qian","doi":"10.1109/ROBIO55434.2022.10011913","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10011913","url":null,"abstract":"The aquaculture industry significantly impacts human life and social development, since it provides excellent resources and continues to grow to meet our needs. To improve production efficiency and minimize risk, suitable site selection in aquaculture is desirable. This paper proposes a predictive method based on environmental sampling information to assess site suitability for aquaculture. To this end, a robotic platform is designed to automatically patrol a water body with sensors sampling environmental information. Based on the obtained data, a machine learning model is trained and used to assess the suitability probability. Finally, potential sites can be selected for the future aquaculture industry. Both the predictive method and the robotic platform were tested in an outdoor lake, and the results verified their feasibility. Both the platform and the prediction method could be applied to increase site selection efficiency, thus promoting the development of the aquaculture industry.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125850070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human-Aided Online Terrain Classification for Bipedal Robots Using Augmented Reality","authors":"Zahraa Awad, Celine Chibani, Noel Maalouf, Imad H. Elhajjl","doi":"10.1109/ROBIO55434.2022.10011705","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10011705","url":null,"abstract":"This paper presents an online training system, enhanced with augmented reality, for improving real-time terrain classification by humanoid robots. The real-time terrain type prediction model relies on data acquired from four different sensors (force, position, current, and inertial) of the NAO humanoid robot. We compare the performance of Stochastic Gradient Descent, the Passive Aggressive classifier, and the Support Vector Machine in predicting the terrain type being traversed. The models are then trained online by manually inputting the correct terrain type being traversed to improve prediction accuracy over time. An Augmented Reality (AR) user interface is designed to display the robot diagnostics and the predicted terrain type, and to obtain user feedback to correct the terrain type when needed. This allows the user to improve the classification results and enhance the data collection process as easily as possible. The experimental results show that the Passive Aggressive classifier is the most successful of the three online classifiers, with an accuracy of 81.4%.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126816055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
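The Passive Aggressive classifier that wins here is an online learner with a closed-form, margin-based update: it stays passive when an example satisfies the margin and otherwise takes the smallest corrective step. A minimal NumPy sketch of the PA-I update rule, independent of whatever implementation the authors used:

```python
import numpy as np

def pa_update(w, x, y, C=1.0):
    """PA-I: if the hinge loss on (x, y) is zero, leave w unchanged; otherwise
    move w just enough (step capped by C) to satisfy the margin."""
    loss = max(0.0, 1.0 - y * (w @ x))
    tau = min(C, loss / (x @ x))   # closed-form step size
    return w + tau * y * x

# Two linearly separable clusters as a stand-in for two terrain classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(200, 2)) + [2.0, 0.0],
               rng.normal(size=(200, 2)) - [2.0, 0.0]])
y = np.array([1.0] * 200 + [-1.0] * 200)
order = rng.permutation(len(y))

w = np.zeros(2)
for i in order:              # a single online pass over the stream
    w = pa_update(w, X[i], y[i])

acc = float(np.mean(np.sign(X @ w) == y))
```

The same loop structure supports the paper's human-in-the-loop setup: whenever the AR interface supplies a corrected terrain label, the model absorbs it with one cheap update.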
{"title":"MR-GMMExplore: Multi-Robot Exploration System in Unknown Environments based on Gaussian Mixture Model","authors":"Yichun Wu, Qiuyi Gu, Jincheng Yu, Guangjun Ge, Jian Wang, Q. Liao, Chun Zhang, Yu Wang","doi":"10.1109/ROBIO55434.2022.10011789","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10011789","url":null,"abstract":"Collaborative exploration in an unknown environment is an essential task for mobile robotic systems. Without external positioning, multi-robot mapping methods have relied on the transfer of place descriptors and sensor data for relative pose estimation, which is not feasible in communication-limited environments. In addition, existing frontier-based exploration strategies are mostly designed for occupancy grid maps, and thus fail to use surface information of obstacles in complex three-dimensional scenes. To address these limitations, we use the Gaussian Mixture Model (GMM) as the map form for both mapping and exploration. We extend our previous mapping work to the exploration setting by introducing MR-GMMExplore, a Multi-Robot GMM-based Exploration system in which robots transfer GMM submaps to reduce data transmission and perform exploration directly on the generated GMM map. Specifically, we propose a GMM spatial information extraction strategy that efficiently extracts obstacle probability information from GMM submaps. We then present a goal selection method that allows robots to explore different areas, and a GMM-based local planner that performs local planning on GMM maps instead of converting them into grid maps. Simulation results show that transmitting GMM submaps reduces communication load by approximately 96% compared with point clouds, and that our mean-based extraction strategy is 4 times faster than the traversal-based one. We also conduct comparative experiments to demonstrate the effectiveness of our approach in reducing backtracking paths and enhancing cooperation. MR-GMMExplore is published as an open-source ROS package at https://github.com/efc-robot/gmm_explore.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121504453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
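The claimed communication savings follow from the arithmetic of GMM submaps: a K-component GMM in d dimensions costs a fixed number of floats regardless of how many points it summarizes. A small EM fit (diagonal covariances assumed here for brevity; the paper's exact parameterization may differ) makes the comparison concrete:

```python
import numpy as np

def fit_gmm(points, k=8, iters=20, seed=0):
    """Plain EM for a diagonal-covariance GMM over an (n, d) point cloud."""
    rng = np.random.default_rng(seed)
    n, d = points.shape
    mu = points[rng.choice(n, k, replace=False)]      # init means at data points
    var = np.tile(points.var(axis=0), (k, 1))
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: log-responsibilities under each diagonal Gaussian
        logp = -0.5 * (((points[:, None, :] - mu) ** 2) / var
                       + np.log(2 * np.pi * var)).sum(-1) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: reweight mixture parameters
        nk = r.sum(axis=0) + 1e-9
        pi = nk / n
        mu = (r.T @ points) / nk[:, None]
        var = (r.T @ points ** 2) / nk[:, None] - mu ** 2 + 1e-6
    return pi, mu, var

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(c, 0.1, size=(300, 3))
                 for c in ([0, 0, 0], [2, 0, 0], [0, 2, 1])])  # toy "submap"
pi, mu, var = fit_gmm(pts, k=8)
payload_floats = 8 * (2 * 3 + 1)   # weight + mean + diagonal covariance per component
```

Transmitting 56 floats instead of 2700 point coordinates is the kind of reduction behind the roughly 96% figure reported above.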
{"title":"SmallRhex: A Fast and Highly-Mobile Hexapod Robot","authors":"Wenhui Wang, Wujie Shi, Zerui Li, Weiheng Zhuang, Zheng Zhu, Zhenzhong Jia","doi":"10.1109/ROBIO55434.2022.10012013","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10012013","url":null,"abstract":"Relying on the unique C-leg structure, RHex robots achieve good mobility and traversability with relatively simple structures. Building on existing RHex robots while balancing performance against cost, this work develops a small hexapod robot, named smallRhex, with low cost but strong performance. This paper introduces the mechanical structure, gaits, simulation, and physical performance tests of the smallRhex robot. The hardware is mainly based on a Raspberry Pi single-board computer and RoboMaster motors and accessories, whose high power density meets the dual requirements of performance and cost. 3D printing, sheet metal, and machined parts are combined to complete the mechanical design and assembly of the robot. At the control level, the Raspberry Pi directly controls the six motors. The gait design includes basic motion gaits, namely straight walking and turning, and gaits for complex motion: stair climbing, jumping, and high obstacle climbing. These are simulated in Webots. Finally, performance and gait tests of the robot are carried out, the gait design is further optimized, and the basic design of the smallRhex robot is completed.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127576239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
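The basic straight-walking gait of a RHex-class robot is an alternating tripod: legs {0, 2, 4} run half a period out of phase with legs {1, 3, 5}, so one tripod supports while the other recirculates. A minimal phase clock, not smallRhex's actual controller, looks like:

```python
import math

def tripod_leg_angles(t, period=1.0):
    """Hip angle commands for six C-legs under an alternating tripod clock:
    odd-indexed legs are offset by half a period from even-indexed legs."""
    phases = [(t / period + (0.5 if i % 2 else 0.0)) % 1.0 for i in range(6)]
    return [2 * math.pi * p for p in phases]

angles = tripod_leg_angles(0.0)
```

A real controller would additionally shape each cycle into slow-stance and fast-swing segments, but the half-period offset above is what produces the tripod alternation.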
{"title":"Prospect of Robot Assisted Maxilla-Mandibula-Complex Reposition in Orthognathic Surgery","authors":"Jie Liang, Qianqian Li, Xing Wang, Xiaojing Liu","doi":"10.1109/ROBIO55434.2022.10011845","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10011845","url":null,"abstract":"This paper investigates the feasibility and accuracy of robot-assisted maxilla-mandibula-complex (MMC) reposition in orthognathic surgery. A robot system was established from an optical motion capture system and a universal robotic arm. Computer-assisted surgical simulation (CASS), image guidance, and robotic control software modules were developed according to the specific requirements of orthognathic surgery. The operation workflow includes data acquisition, virtual simulation, registration, osteotomy, robot-assisted bone segment reposition, and fixation. The reposition and holding accuracy was tested on skull models. An optical scanner was used to acquire the intraoperative skull morphology before and after fixation, and a postoperative CT scan was conducted after fixation was completed. The virtual skull, intraoperative scan data, and postoperative CT image were superimposed and compared. Error was defined as the root mean square (RMS) distance of the MMC between images. The positioning accuracy was calculated as the RMS between the surface scan before fixation and the virtually designed skull; the holding accuracy was calculated as the RMS between the surface scans before and after fixation. A validation test was conducted on five skull models. The mean accuracy of robot-assisted MMC reposition was 0.75±0.69 mm, while the mean holding accuracy during the fixation procedure was 1.56±1.2 mm. The accuracy of robot-assisted MMC reposition was clinically feasible, whereas the holding accuracy during fixation was less satisfactory than the repositioning accuracy. Further investigation is needed to improve the holding solidity of the robotic manipulator.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128933186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
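The RMS error used above for both positioning and holding accuracy has a compact definition over corresponding surface points; the point sets and the 0.5 mm per-axis offset below are illustrative, not the study's data:

```python
import numpy as np

def rms(a, b):
    """Root-mean-square distance between corresponding 3D points (in mm)."""
    return float(np.sqrt(((a - b) ** 2).sum(axis=1).mean()))

planned = np.array([[0.0, 0.0, 0.0],
                    [10.0, 0.0, 0.0],
                    [0.0, 10.0, 0.0]])          # landmarks on the virtual design
achieved = planned + np.array([0.5, 0.5, 0.5])  # uniform 0.5 mm offset per axis
err = rms(planned, achieved)                    # sqrt(3 * 0.5^2) ~ 0.87 mm
```

Positioning accuracy compares the pre-fixation scan against the virtual design; holding accuracy compares scans before and after fixation, using the same metric.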