Low Cost Point to Point Navigation System
G. Spampinato, A. Bruna, Davide Giacalone, G. Messina
2021 7th International Conference on Automation, Robotics and Applications (ICARA). DOI: 10.1109/ICARA51699.2021.9376545
Abstract: This paper describes a novel low-cost method called "Towards and Tangent" for point-to-point navigation in robotic systems. The system navigates an unknown environment and avoids obstacles to reach the desired point using only a laser range sensor, without integrating MEMS or any other kind of data. It consists of two simple steps: head toward the goal, and select the direction with the lowest angle to overcome the obstacle. Compared to state-of-the-art algorithms, it therefore requires fewer computational resources, since it does not need to detect obstacle discontinuities or follow an obstacle's boundaries. Moreover, a simple state machine can handle both obstacle avoidance and point-to-point navigation. Although the system is easy to implement and requires few resources, it achieves high performance, in line with more sophisticated algorithms, and works well in real time.
YOLOv3-Based Human Activity Recognition as Viewed from a Moving High-Altitude Aerial Camera
Wazha Mmereki, R. Jamisola, Dimane Mpoeleng, Tinao Petso
2021 7th International Conference on Automation, Robotics and Applications (ICARA). DOI: 10.1109/ICARA51699.2021.9376435
Abstract: This paper presents a method to classify human activities as normal or suspicious, using YOLOv3 to automatically process video footage taken from a moving high-altitude aerial camera, such as one attached to a drone. We consider four human activities, namely jogging, walking, fighting, and chasing. Objects generally appear much smaller, with fewer visible features, when viewed from high altitudes. The reduced visible features make automatic human activity detection designed for ground surveillance cameras inapplicable to the high-altitude case. Through transfer learning, we modified existing pre-trained YOLOv3 convolutional neural networks (CNNs) and retrained them on our own high-altitude human action dataset. In doing so, we were able to customize YOLOv3 to detect, localize, and recognize aerial human activities in real time as normal or suspicious. The proposed approach achieves a promising average precision of 82.30% and an average F1 score of 88.10% in classifying high-altitude human activities. We demonstrate that YOLOv3 is a powerful and relatively fast approach for recognizing and localizing human subjects seen from above.
{"title":"A Novel Haptic Based Guidance Scheme for Swarm of Magnetic Nanoparticles Steering","authors":"Chayabhan Limpabandhu, A. K. Hoshiar","doi":"10.1109/ICARA51699.2021.9376563","DOIUrl":"https://doi.org/10.1109/ICARA51699.2021.9376563","url":null,"abstract":"A wide variety of haptic based teleoperation systems has been introduced for medical applications. Using force feedback in a haptic device is quite helpful for small and delicate medical interventions. This paper presents a haptic based virtual reality environment developed to steer magnetic nanoparticles (MNPs) with a guidance strategy for moving the MNPs to the desired outlet. As a proof of concept, a low-cost 3D printed open-source haptic device is used. The experiments show that the haptic can efficiently steer MNPs in different velocities by magnetic forces. We have studied process (magnetic field) and environmental (fluid velocity, number of particles) parameters with the VR-based haptic system to determine the most influential parameters. The fluid velocity showed to have the highest effect on steering performance. It has been shown that in the high fluid (10 mm/s velocities), only 50% of the particles are steered. We have developed a guidance scheme based on variable forbidden and safe zones to elevate the steering performance. By using the proposed guidance scheme, a 17.5% improvement in the performance has been observed. The promising results showed the potential of this approach in MNP based delivery.","PeriodicalId":183788,"journal":{"name":"2021 7th International Conference on Automation, Robotics and Applications (ICARA)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129610935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Who Controls Your Robot? An Evaluation of ROS Security Mechanisms","authors":"Niklas Goerke, David Timmermann, I. Baumgart","doi":"10.1109/ICARA51699.2021.9376468","DOIUrl":"https://doi.org/10.1109/ICARA51699.2021.9376468","url":null,"abstract":"The Robot Operation System (ROS) is widely used in academia as well as the industry to build custom robot applications. Successful cyberattacks on robots can result in a loss of control for the legitimate operator and thus have a severe impact on safety if the robot is moving uncontrollably. A high level of security thus needs to be mandatory. Neither ROS 1 nor 2 in their default configuration provide protection against network based attackers. Multiple protection mechanisms have been proposed that can be used to overcome this. Unfortunately, it is unclear how effective and usable each of them are. We provide a structured analysis of the requirements these protection mechanisms need to fulfill by identifying realistic, network based attacker models and using those to derive relevant security requirements and other evaluation criteria. Based on these criteria, we analyze the protection mechanisms available and compare them to each other. We find that none of the existing protection mechanisms fulfill all of the security requirements. For both ROS 1 and 2, we discuss which protection mechanism are most relevant and give hints on how to decide on one. We hope that the requirements we identify simplify the development or enhancement of protection mechanisms that cover all aspects of ROS and that our comparison helps robot operators to choose an adequate protection mechanism for their use case.","PeriodicalId":183788,"journal":{"name":"2021 7th International Conference on Automation, Robotics and Applications (ICARA)","volume":"1894 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130045906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stability Analysis for T-S Fuzzy Semi-Markovian Switching CVNs with Mixed Delays and General Uncertain Transition Rates","authors":"Qiang Li, Jinling Liang","doi":"10.1109/ICARA51699.2021.9376561","DOIUrl":"https://doi.org/10.1109/ICARA51699.2021.9376561","url":null,"abstract":"This paper concerns with the stochastic stability problem for Takagi-Sugeno (T-S) fuzzy semi-Markovian switching complex-valued networks (CVNs) with mixed delays, where the transition rates of the semi-Markovian process are in the general uncertain form which contain two cases: completely unknown or unknown but with known upper/lower bounds. Based on the Lyapunov stability theory and the stochastic analysis technique, several mode-dependent stability criteria are established to guarantee the considered T-S fuzzy CVN to be asymptotically stable in the mean-square sense. Finally, one numerical example is provided to demonstrate feasibility of the obtained theoretical results.","PeriodicalId":183788,"journal":{"name":"2021 7th International Conference on Automation, Robotics and Applications (ICARA)","volume":"146 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122563237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Next Best View Planning for Time-Variant Scenes","authors":"Embla Morast, P. Jensfelt","doi":"10.1109/ICARA51699.2021.9376559","DOIUrl":"https://doi.org/10.1109/ICARA51699.2021.9376559","url":null,"abstract":"In modelling and reconstruction of objects or scenes, dynamic settings present different challenges than the more well-studied static case. In this work, we explore these challenges by investigating how next best view planning can be adapted for observation of dynamic scenes. We conduct a thorough review of different representations of information based on precedent from the static case, and find that view planning cannot be directly transferred from static environments to dynamic scenes without accounting for information deterioration and observational bias.","PeriodicalId":183788,"journal":{"name":"2021 7th International Conference on Automation, Robotics and Applications (ICARA)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126480342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image-Based Visual Servoing of Rotationally Invariant Objects Using a U-Net Prediction","authors":"Norbert Mitschke, M. Heizmann","doi":"10.1109/ICARA51699.2021.9376577","DOIUrl":"https://doi.org/10.1109/ICARA51699.2021.9376577","url":null,"abstract":"In this article an image-based visual servoing for the armature of electric motors is presented. For a calibrated monocular eye-in-hand camera system our goal is to move the camera to the desired position with respect to the armature. For this purpose we minimize the error between a corresponding feature vector and a measured feature vector. In this paper we derived various features from the output of a U-Net. The variety leads to the fact that we can decouple the features in the control process. The prediction of the U-Net is stabilized by strong augmentation, an armature model and an adaptive digital zoom. We can show that our U-Net control approach converges and is robust against noise and multiple objects.","PeriodicalId":183788,"journal":{"name":"2021 7th International Conference on Automation, Robotics and Applications (ICARA)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122282246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Learning Localization with 2D Range Scanner
G. Spampinato, A. Bruna, I. Guarneri, Davide Giacalone
2021 7th International Conference on Automation, Robotics and Applications (ICARA). DOI: 10.1109/ICARA51699.2021.9376424
Abstract: In recent years, the use of 2D laser range scanners in industrial products has been increasing, thanks to the decreasing cost and increasing accuracy of these devices. Nevertheless, estimating the localization of moving objects (vehicles, robots, drones, and so on) between consecutive laser range scans is still a challenging problem. In this paper, we explore different neural network approaches to this problem using only a 2D laser scanner. The proposed neural network shows promising results in terms of average accuracy (a Mean Absolute Error (MAE) of about 1 cm in translation and 1° in rotation) and in terms of total parameters (fewer than one hundred thousand), making it an interesting method that could complement or be integrated with traditional localization approaches. The proposed neural network processes about 8000 pairs of compacted scans per second on an Nvidia Titan X (Pascal) GPU.
{"title":"Improving Deep Multi-modal 3D Object Detection for Autonomous Driving","authors":"Razieh Khamsehashari, K. Schill","doi":"10.1109/ICARA51699.2021.9376453","DOIUrl":"https://doi.org/10.1109/ICARA51699.2021.9376453","url":null,"abstract":"Object detection in real-world applications such as autonomous driving scenarios is a challenging issue since objects often occlude each other. 3D object detection has achieved high accuracy and efficiency, but detecting small object instances and occluded objects are the most challenging issues to deploy detectors in crowded scenes. Our main focus in this paper is deep multi-modal based object detector in an automated driving system with early fusion on 3D object detection utilizing both Light Detection and Ranging (LiDAR) and image data. We aim at obtaining highly accurate 3D localization and recognition of objects in the road scene and try to improve the performance. In this regard, our basic architecture follows an established two-stage architecture, Aggregate View Object Detection-Feature Pyramid Network (AVOD-FPN), one of the best among sensor fusion-based methods. AVOD-FPN has yielded promising results especially for detecting small instances. Moreover, another main challenging issue in autonomous driving is detecting the occluded objects. So we try to address this difficulty by integrating attention network into the multi-modal 3D object detector. Experiments are shown to produce state-of-the-art results on the KITTI 3D sensor fusion-based object detection benchmark.","PeriodicalId":183788,"journal":{"name":"2021 7th International Conference on Automation, Robotics and Applications (ICARA)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131839161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Pick and Place Multi Robot Coordination Using Multi-agent Deep Reinforcement Learning","authors":"Xi Lan, Yuansong Qiao, Brian Lee","doi":"10.1109/ICARA51699.2021.9376433","DOIUrl":"https://doi.org/10.1109/ICARA51699.2021.9376433","url":null,"abstract":"Recent advances in deep reinforcement learning are enabling the creation and use of powerful multi-agent systems in complex areas such as multi-robot coordination. These show great promise to help solve many of the difficult challenges of rapidly growing domains such as smart manufacturing. In this position paper we describe our early-stage work on the use of multi-agent deep reinforcement learning to optimise coordination in a multi-robot pick and place system. Our goal is to evaluate the feasibility of this new approach in a manufacturing environment. We propose to adopt a decentralised partially observable Markov Decision Process approach and to extend an existing cooperative game work to suitably formulate the problem as a multiagent system. We describe the centralised training/decentralised execution multi-agent learning approach which allows a group of agents to be trained simultaneously but to exercise decentralised control based on their local observations. We identify potential learning algorithms and architectures that we will investigate as a base for our implementation and we outline our open research questions. Finally we identify next steps in our research program.","PeriodicalId":183788,"journal":{"name":"2021 7th International Conference on Automation, Robotics and Applications (ICARA)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132235996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}