JUPITER – ROS based Vehicle Platform for Autonomous Driving Research
Johann Haselberger, Marcel Pelzer, B. Schick, S. Müller
2022 IEEE International Symposium on Robotic and Sensors Environments (ROSE), 14 November 2022. DOI: https://doi.org/10.1109/ROSE56499.2022.9977434

Abstract: During the development of state-of-the-art driver assistance systems and highly autonomous driving functions, there is a demand for reliable research vehicle platforms that can be used in a variety of applications. Especially for data-driven machine learning approaches, a large amount of measurement data obtained from multimodal sensors is needed. This paper presents a Robot Operating System (ROS) based prototype vehicle, built on a Porsche Cayenne, that provides a dedicated test environment for autonomous driving research. To bridge the gap between pure research and actual production vehicles, the platform features near-series placement of sensors and the use of the built-in camera and actuators. Open-source packages and a containerized software architecture make the system reusable and easy to extend in terms of hardware and algorithms. Furthermore, we describe our approach for data recording and long-term persistence.
{"title":"Data-Driven Hardware-in-the-Loop Plant Modeling for Self-Driving Vehicles","authors":"Hannah Grady, Nicholas Nauman, Md. Suruz Miah","doi":"10.1109/ROSE56499.2022.9977411","DOIUrl":"https://doi.org/10.1109/ROSE56499.2022.9977411","url":null,"abstract":"In this paper, we present data-driven hardware-in-the-loop (HIL) plant models of different subsystems of a self-driving vehicle. Despite numerous concerns, the automotive industry is still investing remarkable resources into the production of self-driving vehicles. Among the challenges in the development process of such vehicles are the validation and testing process of various subsystems. Here we provide data-driven models of different subsystems so that the automotive industry can validate and test autonomous vehicles without the need of a physical vehicle, which would reduce the considerable amount of cost to the automotive industry. The vehicle subsystems considered in this work include the steering, acceleration, brake, shift, speed, and speed control subsystems. Each of these subsystems is either a multi-input single output or single-input single output system. A Lexus RX450H self-driving vehicle is employed to collect raw data (inputs and outputs data for different subsystems) offline. We used the deep learning toolbox available in the commercial software package, MATLAB/SIMULINK, for modeling each of these systems. The contribution of this paper is twofold. First, collecting real time raw data from a physical Lexus RX450H vehicle and using it to develop machine learning models to represent the vehicle subsystems. Second, subsystem models created using machine learning tools for the Lexus vehicle are tested using Hardware-in-the-Loop. Therefore, the results of such modeling could be used for validation and testing without the need for a physical self-driving vehicle. The proposed modeling results could be useful for reducing the cost of the vehicle development process, since a physical vehicle is not required for validation and testing.","PeriodicalId":265529,"journal":{"name":"2022 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116451897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Vision based sidewalk recognition for walking assistive robot in outdoor environments
Kodai Oyake, Goragod Pongthanisorn, Aya Shirai, S. Kaneko, G. Capi
2022 IEEE International Symposium on Robotic and Sensors Environments (ROSE), 14 November 2022. DOI: https://doi.org/10.1109/ROSE56499.2022.9977414

Abstract: Robots that assist elderly people to walk in outdoor environments are important, especially in aging societies. Outdoor robot navigation is highly challenging due to the large variety of environments and situations: for safe navigation, the robot must recognize the sidewalk while avoiding pedestrians and obstacles. In this paper, we propose a deep learning method for sidewalk recognition. We trained two deep network algorithms, YOLO and YOLACT, and compared their performance on sidewalk recognition. The trained networks were then deployed to control the walking-assistive robot developed in our laboratory as it navigates outdoor environments, where they showed good performance.

Transformers for Imbalanced Baggage Threat Recognition
D. Velayudhan, Abdelfatah Hassan Ahmed, Taimur Hassan, Bennamoun, E. Damiani, N. Werghi
2022 IEEE International Symposium on Robotic and Sensors Environments (ROSE), 14 November 2022. DOI: https://doi.org/10.1109/ROSE56499.2022.9977427

Abstract: Baggage screening for concealed threat items has become indispensable for maintaining public security at high-risk locations, including airports and border checkpoints. However, manual screening requires both expertise and experience, and is cumbersome and error-prone, which has encouraged researchers to develop autonomous baggage-screening systems. Most existing approaches are based on CNNs, which prioritize localized interactions due to their strong inductive bias, restricting their ability to model object-level and image-wide context. Hence, in this paper we explore Transformers for baggage threat recognition, exploiting their ability to model global features to capture concealed threat items within cluttered, tightly packed baggage scans and thereby learn enhanced representations for identifying abnormal scans. Further, the tendency of vision transformers to prioritize shape over texture makes them suitable candidates for threat recognition in baggage scans, which lack texture and have low contrast. We also explore the potential of vision transformers in heavily imbalanced settings, and we implement a weakly supervised localization approach to identify the input regions contributing to the abnormality classification. The proposed approach surpasses state-of-the-art methods, achieving F1 scores of 0.979 on Compass-XP and 0.873 on SIXray.

Dilated Convolution and Residual Network based Convolutional Neural Network for Recognition of Disastrous Events
Dania Shafique, M. Akram, Taimur Hassan, Tahira Anwar, A. A. Salam
2022 IEEE International Symposium on Robotic and Sensors Environments (ROSE), 14 November 2022. DOI: https://doi.org/10.1109/ROSE56499.2022.9977424

Abstract: Natural disasters such as earthquakes, landslides, floods, and typhoons cause great damage to man-made structures. To manage natural disasters efficiently, it is important to develop an automatic disaster recognition system based on deep learning algorithms such as convolutional neural networks. This research applies deep learning to the recognition of disasters such as collapsed buildings and burning buildings, caused by earthquakes and fire respectively. We implement a novel approach using a single deep convolutional neural network built on two main components: dilated convolution, in which the convolution kernel is applied to the input with defined gaps to capture more contextual information and fine detail, and residual connections, in which a layer's input is summed with the output of earlier layers rather than connected only to the adjacent layer, reducing the vanishing-gradient problem. The Dilated Residual Network (DRN) was trained and tested on the publicly available disaster datasets NWPU-RESISC45, BoWFire, Satellite Images of Hurricane Damage, and the Accident Image Analysis dataset, achieving testing accuracies of 92.06%, 76%, 98.15%, and 93.16% respectively. Because no single disastrous-event dataset exists, we also collected disaster images in bulk using web-scraping tools; after discarding irrelevant images, the resulting dataset consists of four classes of 10,000 images each. Applied to this dataset, the DRN achieved 95.67% testing accuracy. The results show that the proposed methodology is efficient and can be generalized to other disaster classification problems.

Catch-Me-If-You-Can Infrastructure-less UWB-based Leader-Follower System for Compact UAVs
L. Santoro, M. Nardello, Marco Calliari, William Cechin Guarienti
2022 IEEE International Symposium on Robotic and Sensors Environments (ROSE), 14 November 2022. DOI: https://doi.org/10.1109/ROSE56499.2022.9977417

Abstract: Thanks to the development of compact and micro-sized Unmanned Aerial Vehicles (UAVs), their use in dynamic and complex environments is becoming increasingly common. UAVs can be exploited in many fields, from disaster rescue to logistics transportation and precision agriculture. This paper presents a leader-follower application for human-robot interaction. The system is based on ranging measurements and exploits only low-cost UWB radios, avoiding complex and expensive vision-based systems. System performance was assessed using HIL simulations and outdoor tests. The results show good accuracy and robustness of the tracking system and acceptable error levels, while always ensuring the right level of safety for the human involved.

Image Classification and Text Identification in Inspecting Military Aircrafts Logos: Application of Convolutional Neural Network
S. Edhah, Abeer Awadallah, Mayar Madboly, Hamdihun Dawed, N. Werghi
2022 IEEE International Symposium on Robotic and Sensors Environments (ROSE), 14 November 2022. DOI: https://doi.org/10.1109/ROSE56499.2022.9977418

Abstract: Object detection and inspection using images or videos have been receiving increased attention in applications such as traffic control, brand monitoring, trademark compliance, and product authentication. One application currently of interest is aircraft logo detection, which aims to automate the visual inspection carried out manually by aircraft engineers. Aircraft logos must meet a large set of requirements, including geometric constraints on the logo elements and patterns, and constraints on the position and orientation with respect to specific references. This work designs a high-accuracy convolutional neural network to detect and classify aircraft logos as either adequate or inadequate based on specified criteria; its performance is compared to a number of classical machine learning algorithms to demonstrate its effectiveness. Adequate logos are then processed further by extracting them from a frame using a robust feature-extraction algorithm and determining their orientation angle with respect to the horizontal reference axis. Afterward, text detection using a character-region-awareness-for-text-detection (CRAFT) algorithm implemented on a pre-trained network is carried out, along with an optical character recognition (OCR) tool, to detect and extract the text from the logos for further processing in other applications. The developed network is tested on actual aircraft logos, captured in the field, where satisfactory results are obtained.
{"title":"Image-based Obstacle Avoidance using 3DConv Network for Rocky Environment","authors":"Abderrahmene Boudiaf, A. Sumaiti, J. Dias","doi":"10.1109/ROSE56499.2022.9977423","DOIUrl":"https://doi.org/10.1109/ROSE56499.2022.9977423","url":null,"abstract":"Autonomous navigation systems are an essential part of Unmanned Ground Vehicles (UGVs) since they allow for navigating without supervision in conditions where communication is not available or the existence of high delays which prevent direct communication. One fundamental part of autonomous navigation is obstacle avoidance. Typical approaches utilize some form of distance measuring-based sensors like LIDAR or SONAR. However, such devices have a relatively higher cost in comparison to conventional RGB cameras in addition to introducing complexity in data processing which results in an increase in computational cost and power consumption. In this work, we use sequential RGB data and a Conv3d-based network to create a real-time obstacle avoidance system with high accuracy, low latency, and low processing cost. For training, we used Unreal Engine 4 based simulator to collect a dataset to train the network. Testing the system in a simulated environment using the same simulator showed the ability of the network to avoid obstacles in a realistic environment where rocks of different sizes and shapes were used. Future work can include improving in terms of performance and processing time as well as implementing the network with a real word working prototype and comparing the simulated results with actual performance.","PeriodicalId":265529,"journal":{"name":"2022 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126919562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Learning-Based Approach for Bias Elimination in Low-Cost Gyroscopes","authors":"Daniel Engelsman, I. Klein","doi":"10.1109/ROSE56499.2022.9977422","DOIUrl":"https://doi.org/10.1109/ROSE56499.2022.9977422","url":null,"abstract":"Modern sensors play a pivotal role in many operating platforms, as they manage to track the platform dynamics at a relatively low manufacturing costs. Their widespread use can be found starting from autonomous vehicles, through tactical platforms, and ending with household appliances in daily use. Upon leaving the factory, the calibrated sensor starts accumulating different error sources which slowly wear out its precision and reliability. To that end, periodic calibration is needed, to restore intrinsic parameters and realign its readings with the ground truth. While extensive analytic methods exist in the literature, little is proposed using data-driven techniques and their unprecedented approximation capabilities. In this study, we show how bias elimination in low-cost gyroscopes can be performed in considerably shorter operative time, using a unique convolutional neural network structure. The strict constraints of traditional methods are replaced by a learning-based regression which spares the time-consuming averaging time, exhibiting efficient sifting of background noise from the actual bias.","PeriodicalId":265529,"journal":{"name":"2022 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129786102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Towards Multidimensional Textural Perception and Classification Through Whisker
P. K. Routray, A. Kanade, P. Pounds, Manivannan Muniyandi
2022 IEEE International Symposium on Robotic and Sensors Environments (ROSE), 1 September 2022. DOI: https://doi.org/10.1109/ROSE56499.2022.9977409

Abstract: Texture-based studies and designs have recently come into focus, yet whisker-based multidimensional surface-texture data is missing from the literature. Such data is critical for robotics and machine-perception algorithms in the classification and regression of textural surfaces. In this study, we present a novel sensor design to acquire multidimensional texture information. The roughness and hardness of surface textures were measured experimentally using sweeping and dabbing motions. Three machine learning models (SVM, RF, and MLP) showed excellent classification accuracy on the roughness and hardness of surface textures, and we show that combining the pressure and accelerometer data collected from a standard machined specimen using the whisker sensor improves classification accuracy. Further, we experimentally validate that the sensor can classify textures with roughness depths as low as 2.5 µm at an accuracy of 90% or more, and can segregate materials based on their roughness and hardness. We also present a novel metric to consider when designing a whisker sensor, to guarantee the quality of texture-data acquisition beforehand. Model performance was validated against data collected with a laser sensor on the same set of surface textures. As part of this work, we are releasing the two-dimensional texture data (roughness and hardness) to the research community.