Sample-efficient learning of soft priorities for safe control with constrained Bayesian optimization
Jun Yu Li, Yiyao Zhu, Langcheng Huo, Yongquan Chen
2020 Fourth IEEE International Conference on Robotic Computing (IRC), November 2020. DOI: 10.1109/IRC.2020.00069

A complex motion can be achieved by executing multiple tasks simultaneously, where the key is tuning the task priorities. Generally, task priorities are predefined manually. Different frameworks have been proposed to generate task priorities automatically. In this paper, we employ a black-box optimization method, a variant of constrained Bayesian optimization, to learn soft task priorities, ensuring that the robot motion is optimized with high efficiency and that no constraint violations occur during the whole learning process.
Real-Time In-Situ Process Error Detection in Additive Manufacturing
Pascal Becker, N. Spielbauer, A. Rönnau, R. Dillmann
2020 Fourth IEEE International Conference on Robotic Computing (IRC), November 2020. DOI: 10.1109/IRC.2020.00077

The economic importance of additive manufacturing with Fused Deposition Modeling (FDM) 3D printers has been rising since key patents on crucial parts of the technology expired in the early 2000s. Although there have been major improvements in materials and print quality, the process is still prone to various errors. At the same time, almost none of the available printers use built-in sensors to detect errors and react to their occurrence. This work outlines a monitoring system for FDM 3D printers that detects a multitude of severe and common errors using optical consumer sensors. The system detects layer shifts and stopped extrusion with high accuracy. Furthermore, additional sensors and error-detection methods can easily be integrated thanks to the modular structure of the presented system. To monitor multiple printers without a matching number of sensor sets, the sensor was mounted at the tool center point (TCP) of a robot.
{"title":"The Intelligent Power Wheelchair Upgrade Kit","authors":"Jesse Leaman, H. La","doi":"10.1109/IRC.2020.00074","DOIUrl":"https://doi.org/10.1109/IRC.2020.00074","url":null,"abstract":"This paper presents an update on the research and development of the Intelligent Power Wheelchair (IPW) upgrade kit. First, we review the last three years of trials, then we describe the improvements proposed for the next prototype. The 2020 IPW edition is designed to be more light weight, modular, and multi-functional than it's predecessor. The assembly process has been streamlined, and there is now greater ability to tailor the system to the needs of the individual user. The brand new, self-leveling scanner has broader applications as an aid for people with visual impairment. Keywords: Smart Wheelchair, Human Trials","PeriodicalId":232817,"journal":{"name":"2020 Fourth IEEE International Conference on Robotic Computing (IRC)","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125839707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection of Parcel Boxes for Pallet Unloading Using a 3D Time-of-Flight Industrial Sensor","authors":"Riccardo Monica, J. Aleotti, Dario Lodi Rizzini","doi":"10.1109/IRC.2020.00057","DOIUrl":"https://doi.org/10.1109/IRC.2020.00057","url":null,"abstract":"This work presents a 3D vision system for automatic detection of cardboard parcel boxes of known size, located on the top layer of a pallet of known height. An industrial Time-of-Flight (ToF) sensor is adopted that can operate in different illumination conditions thanks to the use of an infrared light source. The perception system is intended for application in industrial warehousing for end-of-line operations, like robot depalletizing. The proposed method does not assume any predefined layout of parcels and, therefore, it can work even with an incomplete layer of misaligned cardboard boxes. The developed solution first extracts all possible object hypotheses, then an optimization problem is solved, based on a genetic algorithm, to exclude conflicts. Experiments have been performed on a real dataset including complex configurations of tightly packed parcels.","PeriodicalId":232817,"journal":{"name":"2020 Fourth IEEE International Conference on Robotic Computing (IRC)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127769618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Synergistic AUV Navigation through Deployed Surface Buoys
Tauhidul Alam, L. Gandy, Leonardo Bobadilla, Ryan N. Smith
2020 Fourth IEEE International Conference on Robotic Computing (IRC), November 2020. DOI: 10.1109/IRC.2020.00020

In this paper, we present a navigation method for an Autonomous Underwater Vehicle (AUV) that makes use of a set of static water-surface platforms (buoys) deployed in the environment. Our method has the following steps: 1) the communication regions of the buoys are computed from their communication capabilities; 2) a set of feasible paths through the buoys between given initial and goal locations is calculated using the preimages of the buoys' communication regions; 3) the AUV navigation path that uses the fewest buoys for state estimation is chosen from among the feasible paths. Through extensive simulations, we validated our method and demonstrated its applicability.
Real-Time clustering and LiDAR-camera fusion on embedded platforms for self-driving cars
M. Verucchi, Luca Bartoli, Fabio Bagni, Francesco Gatti, P. Burgio, M. Bertogna
2020 Fourth IEEE International Conference on Robotic Computing (IRC), November 2020. DOI: 10.1109/IRC.2020.00068

3D object detection and classification are crucial tasks for perception in Autonomous Driving (AD). To react promptly and correctly to environment changes and avoid hazards, it is of paramount importance to perform these operations with high accuracy and in real time. One of the most widely adopted strategies for improving detection precision is to fuse information from different sensors, such as cameras and LiDAR. However, sensor fusion is a computationally intensive task that may be difficult to execute in real time on an embedded platform. In this paper, we present a new approach to LiDAR-camera fusion that suits the tight timing requirements of an autonomous driving system. The proposed method is based on a new clustering algorithm developed for the LiDAR point cloud, a new technique for aligning the sensors, and an optimization of the Yolo-v3 neural network. The efficiency of the proposed method is validated by comparing it against state-of-the-art solutions on commercial embedded platforms.
A Novel Telerobotic Search System using an Unmanned Aerial Vehicle
Bing-Xian Lu, Ji-Jie Wu, Yu-Chung Tsai, Wan-Ting Jiang, K. Tseng
2020 Fourth IEEE International Conference on Robotic Computing (IRC), November 2020. DOI: 10.1109/IRC.2020.00030

Thanks to their agile mobility, unmanned aerial vehicles (UAVs) have become a promising robotic platform for search and rescue applications. Since decisions in search and rescue missions (e.g., identifying victims or terminating a mission) are difficult to automate, most robotic search and rescue systems rely on teleoperation. This research proposes a novel telerobotic search system consisting of a monitor, a joystick, an eye tracker, and a drone with an RGB-D camera. The experiments demonstrate that (1) human pilots can search for victims efficiently; and (2) the collected data is stored in the rosbag format, which can be further used for analyzing the pilots' search behavior and gaze data.
{"title":"Built-In 360 Degree Separation Monitoring for Grippers on Robotic Manipulators in Human-Robot Collaboration","authors":"Urban B. Himmelsbach, T. Wendt","doi":"10.1109/IRC.2020.00031","DOIUrl":"https://doi.org/10.1109/IRC.2020.00031","url":null,"abstract":"Efficient collaborative robotic applications need a combination of speed and separation monitoring, and power and force limiting operations. While most collaborative robots have built-in sensors for power and force limiting operations, there are none with built-in sensor systems for speed and separation monitoring. This paper proposes a system for speed and separation monitoring directly from the gripper of the robot. It can monitor separation distances of up to three meters. We used single-pixel Time-of-Flight sensors to measure the separation distance between the gripper and the next obstacle perpendicular to it. This is the first system capable of measuring separation distances of up to three meters directly from the robot's gripper.","PeriodicalId":232817,"journal":{"name":"2020 Fourth IEEE International Conference on Robotic Computing (IRC)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131854168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scalable Visual Representation of Sensor-Based, Nested Robot Programs","authors":"M. Riedl, D. Henrich","doi":"10.1109/IRC.2020.00044","DOIUrl":"https://doi.org/10.1109/IRC.2020.00044","url":null,"abstract":"An easy to understand graphical representation for nested robot programs is important to allow also non-experts, who are usually unable to read source code, to understand, what a robot program is meant to do. In this paper we present a Backus-Naur form for describing sensor-based nested robot programs with a corresponding visual representation of these robot programs in the form of control flow along a timeline. To achieve this, we utilize a scalable layered representation to visualize the components of the Backus-Naur form. In addition to that, we add temporal information to the visual representation, so that the users are able to see the execution time of different parts of the program and of the program as a whole. The result of this work are two equivalent, equally powerful representations of robot programs, namely the textual representation in form of the Backus-Naur form and the scalable layered visual representation. Both can be converted into each other without loss. The evaluation shows, that the visual representation is easy to understand for non-experts and therefore suitable to use within the context of intuitive robot programming.","PeriodicalId":232817,"journal":{"name":"2020 Fourth IEEE International Conference on Robotic Computing (IRC)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122603467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Message from the IRC 2020 Program Co-chairs","authors":"","doi":"10.1109/irc.2020.00006","DOIUrl":"https://doi.org/10.1109/irc.2020.00006","url":null,"abstract":"","PeriodicalId":232817,"journal":{"name":"2020 Fourth IEEE International Conference on Robotic Computing (IRC)","volume":"23 Suppl 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124568393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}