{"title":"Control and Monitoring of Thermographic Chambers by Means of PLC and HMI","authors":"Edisson Eder Caballero Ames, Brayan Jesus Leon Quilca, Elvis Nelson Urbano Taipe, Deyby Maycol Huamanchahua Canchanya","doi":"10.1109/RAAI56146.2022.10092968","DOIUrl":"https://doi.org/10.1109/RAAI56146.2022.10092968","url":null,"abstract":"Covid-19 affects the human body in different ways, and most infected people present symptoms of varying complexity. This article presents the design of a system for the control and monitoring of people using thermographic cameras, including an intelligent control system for detecting people with Covid-19 symptoms; the parameters read from the thermographic camera are used to estimate possible suspected cases among people entering the Continental University. The proposed system obtains real-time data for each user entering the Continental University; these parameters are stored in a SQL database linked to an HMI screen where each person's temperature is displayed, and if a person exceeds the established temperature range, access to the facility is immediately restricted. The results of the research showed that the system design contributes to preventing the mass propagation of Covid-19.","PeriodicalId":190255,"journal":{"name":"2022 2nd International Conference on Robotics, Automation and Artificial Intelligence (RAAI)","volume":"452 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121069643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Non-Uniform Quantization and Pruning Using Mu-law Companding","authors":"SeungKyu Jo, JongIn Bae, Sanghoon Park, Jong-Soo Sohn","doi":"10.1109/RAAI56146.2022.10092966","DOIUrl":"https://doi.org/10.1109/RAAI56146.2022.10092966","url":null,"abstract":"Contemporary deep learning models require high computation costs and large model sizes to realize high accuracy, which is not suitable for limited hardware resources such as mobile or edge devices. Model compression methods such as quantization, which reduces the precision of weights or activations, and pruning, which removes unimportant nodes, have been proposed. However, these methods can degrade accuracy significantly. To overcome this limitation, we propose a non-uniform quantization and pruning method using mu-law companding, which preserves accuracy while simultaneously performing pruning and quantization. Experimental results using the ResNet-18 model on the ILSVRC2012 dataset showed that the accuracy increased by 0.4% at 5-bit and 0.3% at 4-bit precision, compared to FP32. In addition, we identified areas containing important information by changing the quantization interval, and visually demonstrated with Grad-CAM++ why the quantization model outperformed FP32.","PeriodicalId":190255,"journal":{"name":"2022 2nd International Conference on Robotics, Automation and Artificial Intelligence (RAAI)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128262661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
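The mu-law companding named in the abstract above follows the standard compressor y = sign(x) * ln(1 + mu|x|) / ln(1 + mu). A minimal Python sketch of non-uniform weight quantization via companding is given below; the choice of mu = 255, the bit width, and the max-abs normalization are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def mu_law_compand(x, mu=255.0):
    # Compress: non-uniform mapping that allocates more resolution near zero.
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=255.0):
    # Exact inverse of mu_law_compand on [-1, 1].
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

def quantize_nonuniform(w, bits=5, mu=255.0):
    # Normalize weights to [-1, 1], compand, quantize uniformly, expand back.
    scale = float(np.max(np.abs(w)))
    if scale == 0.0:
        scale = 1.0
    y = mu_law_compand(w / scale, mu)
    levels = 2 ** (bits - 1) - 1          # symmetric signed grid
    q = np.round(y * levels) / levels
    return mu_law_expand(q, mu) * scale
```

Because the compressor expands resolution near zero, where most trained weights concentrate, small weights survive quantization with lower relative error than under uniform binning; rounding small companded values to zero also acts as pruning.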
{"title":"Robotic Process Automation with Ontology-enabled Skill-based Robot Task Model and Notation (RTMN)","authors":"Congyu Zhang Sprenger, Thomas Ribeaud","doi":"10.1109/RAAI56146.2022.10092996","DOIUrl":"https://doi.org/10.1109/RAAI56146.2022.10092996","url":null,"abstract":"Non-robotic experts are facing challenges in the fast-growing agile production industry. On the one hand, robot programming is time-consuming and costly and requires high levels of expertise. On the other hand, current systems are difficult to understand and control. The authors propose to bridge this gap by introducing an intuitive way of modeling and programming robotic processes that enables non-experts to plan and program robot tasks. The authors conducted a literature review and then adopted both quantitative and qualitative methods in the project ACROBA to deepen the research on this topic. The authors propose a model-driven framework that combines modeling and programming in a graphical way using RTMN, an ontology-enabled skill-based robot task model and notation. Results from the validation process indicate that users find RTMN notations simple to understand and intuitive to use.","PeriodicalId":190255,"journal":{"name":"2022 2nd International Conference on Robotics, Automation and Artificial Intelligence (RAAI)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121817160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of the Effect of the Indoor Environment on L-shaped Movement of Small Size Four Rotor Drone","authors":"Shota Ishii, H. Toda","doi":"10.1109/raai56146.2022.10092993","DOIUrl":"https://doi.org/10.1109/raai56146.2022.10092993","url":null,"abstract":"In this paper, an L-shaped movement control experiment simulating movement through a narrow passage inside a building was carried out with a four-rotor helicopter, and its control stability was evaluated in a small room (4 x 4 m, 3 m height). A small four-rotor drone was used, and measurements were made in an L-shaped corridor whose walls were created with partition boards. To examine the problem of trajectory control through an L-shaped bend, an autonomous flight control experiment was conducted ten times with each of two different control strategies. The experimental results show that (1) the stability of the drone's L-shaped movement is reflected in the standard deviation of the trajectory between the two 90-degree curves and in the overshoot at the second 90-degree curve; (2) the stability depends on the control method, namely the PID and the proposed V2 control methods; and (3) in an environment with obstacles, the overshoot is larger than without obstacles, and the V2 control works effectively. For drone flight through an L-shaped corridor inside a building, the above two evaluation methods are therefore useful, and stable flight can be achieved by reducing overshoot in corridors with walls; the proposed V2 control would be more useful in such cases. As a result, the control method of the drone must be selected carefully according to the usage scene.","PeriodicalId":190255,"journal":{"name":"2022 2nd International Conference on Robotics, Automation and Artificial Intelligence (RAAI)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122449773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Kalman-Particle Hybrid Filter For Improved Localization of AGV In Indoor Environment","authors":"L. Zeghmi, A. Amamou, S. Kelouwani, Jonathan Boisclair, K. Agbossou","doi":"10.1109/RAAI56146.2022.10093002","DOIUrl":"https://doi.org/10.1109/RAAI56146.2022.10093002","url":null,"abstract":"Automated Guided Vehicles (AGVs) are becoming a popular choice for a wide variety of industries, ranging from large-scale warehouses and automotive assembly plants to smaller healthcare facilities. Given this versatility of operational areas, their ability to self-locate becomes critical. However, traditional localization approaches such as classic Monte Carlo Localization (MCL) fail because they rely on noisy measurements from encoders. An alternative approach, Iterative Closest Point (ICP), uses LIDAR to avoid unbounded measurement noise but fails to generate consistent samples in symmetrical environments. In this paper, an Extended Kalman Filter (EKF) based proposal distribution is introduced that combines encoder measurements with LIDAR data to overcome the limitations of classic MCL. The EKF generates samples efficiently by incorporating an adaptive observation covariance estimation solution. The proposed method is implemented in the Robot Operating System (ROS), and the tests are performed in a simulated environment generated by the Gazebo simulator. The results show an overall improvement in localization accuracy compared to classic localization frameworks.","PeriodicalId":190255,"journal":{"name":"2022 2nd International Conference on Robotics, Automation and Artificial Intelligence (RAAI)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133569610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Methodology To Investigate Interference Using Off-The-Shelf LiDARs","authors":"J. Robinson, A. Venturi, Richard Dudley, Maurizio Bevilacqua, V. Donzella","doi":"10.1109/RAAI56146.2022.10092997","DOIUrl":"https://doi.org/10.1109/RAAI56146.2022.10092997","url":null,"abstract":"With the increase of assisted and automated functions provided on new vehicles, and with some automotive manufacturers starting to equip high-end vehicles with LiDARs, there is a need to consider and analyse the effects of LiDAR sensors on different vehicles interacting with each other in close proximity (e.g. cities, highways, crossroads, etc.). This paper investigates interference between 360-degree scanning LiDARs, which are one of the common types of automotive LiDARs. One LiDAR was selected as the victim, and 5 different LiDARs were used one by one as offenders. The victim and offending LiDARs were placed in a controlled environment to reduce sources of noise, and several sets of measurements were carried out and repeated at least four times. When the offending and victim LiDARs were turned on at the same time, some variations in the signals were observed; however, the statistical variation was too low to identify interference. As a result, this work highlights that there is no obvious effect of interference between the selected off-the-shelf 360-degree LiDAR sensors; this lack of interference can be attributed to the working principle of this type of LiDAR and the low probability of having directly interfering beams, and also to the focusing and filtering optical circuits that the LiDARs have by design. The presented results confirm that mechanical scanning LiDAR can be used safely for assisted and automated driving even in situations with multiple LiDARs.","PeriodicalId":190255,"journal":{"name":"2022 2nd International Conference on Robotics, Automation and Artificial Intelligence (RAAI)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128000676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of Aerial Images Using Deep Learning to Identify Critical Areas in Natural Disasters","authors":"Nidhya Shivakumar","doi":"10.1109/RAAI56146.2022.10093000","DOIUrl":"https://doi.org/10.1109/RAAI56146.2022.10093000","url":null,"abstract":"Climate change has led to a rise in the frequency and intensity of natural disasters, which, in turn, affect a wide swath of human-populated areas. The use of Unmanned Aerial Vehicles (UAVs) is on the rise in mapping out disaster areas in their immediate aftermath. However, processing the vast amount of data obtained can take several hours to days, costing crucial time that could be used in saving lives and infrastructure. In this study, methods were developed to automate and accelerate the identification of areas that are in critical need of assistance. A Faster R-CNN object detection model was built to classify buildings as damaged or undamaged with 90% precision, and to further sub-classify them by the type of damage (undamaged, flood, rubble). The number of high-quality labeled images available for training was increased by 163% by developing an auto-label generation technique using weak supervision. Ensemble modeling further improved the recall of model predictions by 16%, with higher prediction accuracy, when analyzing disasters not included in the training set. The utility of the model was demonstrated by using it to produce an annotated video of the 2021 tornado damage in Kentucky. A Mask R-CNN segmentation model had the best performance overall and identified undamaged roads with 100% precision and building damage with precision greater than 84% and recall greater than 74% for all object types. This study demonstrates the power of deep learning in the processing of images from disaster-stricken areas to aid search and rescue efforts and significantly reduce disaster response times.","PeriodicalId":190255,"journal":{"name":"2022 2nd International Conference on Robotics, Automation and Artificial Intelligence (RAAI)","volume":"420 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129236963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Control-based auto-scaling of compute nodes in a fog cluster with service time guarantees","authors":"Dinsha Vinod, PS Saikrishna","doi":"10.1109/RAAI56146.2022.10092987","DOIUrl":"https://doi.org/10.1109/RAAI56146.2022.10092987","url":null,"abstract":"In this paper, we adopt a control-theoretic approach for navigation of a differential drive mobile robot (DDMR) while offloading vision data to be processed in a scalable cluster of fog computing nodes. The approach comprises a linear parameter varying (LPV) framework for modeling and a linear matrix inequality (LMI) based control design for navigation of the DDMR and for auto-scaling of compute nodes in the fog cluster, designed independently. We validate the developed theory for mobile robot navigation in the application environment with service time guarantees in processing offloaded vision data for object detection.","PeriodicalId":190255,"journal":{"name":"2022 2nd International Conference on Robotics, Automation and Artificial Intelligence (RAAI)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115762245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interacting Multiple Model Kalman Filtering for Optimal Vehicle State Estimation","authors":"Giseo Park","doi":"10.1109/RAAI56146.2022.10093004","DOIUrl":"https://doi.org/10.1109/RAAI56146.2022.10093004","url":null,"abstract":"In this paper, we propose a new method for optimal vehicle state estimation using low-cost sensor fusion of in-vehicle sensors and a standalone Global Positioning System (GPS). The estimation targets are non-measurable vehicle states such as the vehicle side-slip angle and the tire cornering stiffness values of the front and rear axles. An Interacting Multiple Model (IMM) Kalman filter is designed to combine the outputs of two Kalman filters, based on vehicle kinematics and bicycle models respectively. To optimally combine the outputs of these two Kalman filters, we compute the weighted probability of each output using a probabilistic method that reflects each model's features in real time. The estimation performance of the proposed IMM Kalman filter was confirmed by experimental results; in particular, comparisons with single Kalman filters in terms of estimation accuracy are performed in detail. The main advantages of the proposed estimation algorithm are 1) optimality from the vehicle model combination and 2) accurate estimation of tire cornering stiffness.","PeriodicalId":190255,"journal":{"name":"2022 2nd International Conference on Robotics, Automation and Artificial Intelligence (RAAI)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127936094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
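The IMM combination step summarized in the abstract above weights each filter's output by a mode probability updated from measurement likelihoods, then moment-matches the per-model estimates. A minimal scalar Python sketch; the function names, the two-model Markov transition matrix, and the scalar state are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def imm_weights(mu_prev, likelihoods, trans):
    # Predict mode probabilities through the Markov transition matrix,
    # then apply a Bayes update with each filter's measurement likelihood.
    c = trans.T @ mu_prev
    mu_new = c * likelihoods
    return mu_new / mu_new.sum()

def imm_fuse(estimates, covariances, mu):
    # Moment-matched combination of per-model estimates (scalar state):
    # the fused covariance includes the spread-of-means term.
    x = sum(m * xi for m, xi in zip(mu, estimates))
    P = sum(m * (Pi + (xi - x) ** 2)
            for m, xi, Pi in zip(mu, estimates, covariances))
    return x, P
```

The spread-of-means term in the fused covariance is what distinguishes IMM fusion from a naive weighted average: disagreement between the kinematics-based and bicycle-model-based filters inflates the reported uncertainty.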
{"title":"A Vision-based System for Social Insect Tracking","authors":"Kristi Žampachů, Jirí Ulrich, Tomáš Rouček, Martin Stefanec, Dominik Dvořáček, Laurenz Fedotoff, D. Hofstadler, Fatemeh Rekabi-Bana, G. Broughton, F. Arvin, T. Schmickl, T. Krajník","doi":"10.1109/RAAI56146.2022.10092977","DOIUrl":"https://doi.org/10.1109/RAAI56146.2022.10092977","url":null,"abstract":"Social insects, especially honeybees, play an essential role in nature, and their recent decline threatens the stability of many ecosystems. The behaviour of social insect colonies is typically governed by a central individual, e.g., by the honeybee queen. The RoboRoyale project aims to use robots to interact with the queen to affect her behaviour and the entire colony's activity. This paper presents a necessary component of such a robotic system: a method capable of real-time detection, localisation, and tracking of the honeybee queen inside a large colony. To overcome problems with occlusions and computational complexity, we propose to combine two vision-based methods for fiducial marker localisation and tracking. Experiments performed on data captured from inside beehives demonstrate that the resulting algorithm outperforms its predecessors in terms of detection precision, recall, and localisation accuracy. The achieved performance allowed us to integrate the method into a larger system capable of physically tracking a honeybee queen inside her colony. The ability to observe the queen in fine detail for prolonged periods of time has already resulted in unique observations of queen-worker interactions. This knowledge will be crucial in designing a system capable of interacting with the honeybee queen and affecting her activity.","PeriodicalId":190255,"journal":{"name":"2022 2nd International Conference on Robotics, Automation and Artificial Intelligence (RAAI)","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121524502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}