Early Fusion Based CNN Architecture for Visual Servoing Systems
Adrian-Paul Botezatu, L. Ferariu, A. Burlacu, Teodor-Andrei Sauciuc
2022 26th International Conference on Methods and Models in Automation and Robotics (MMAR), published 2022-08-22
DOI: 10.1109/MMAR55195.2022.9874328
Citations: 1
Abstract
Visual feedback control refers to the use of image data to control the motion of a robotic system. The problem amounts to generating the linear and angular velocities that drive a robotic system from an initial image to a target one. Classic visual servoing methods have disadvantages, such as the difficulty of reliably extracting and tracking visual features across environmental conditions, and nonlinear dependencies on camera calibration. In recent years, these limitations have been alleviated by employing Convolutional Neural Networks (CNNs). The main goal of this work is to improve the performance of CNNs in visual feedback control by expanding the neural input arrays with extra available data. To this end, additional maps created via region-based segmentation are used as inputs in an early-fusion architecture. These ready-to-use, simplified descriptions of the initial and target layouts can help the CNN interpret the scenes and compute accurate velocities. The role of the segmented maps is experimentally investigated on two different architectures that exemplify the suggested design idea. The results show that CNNs with input fusion offer a better approximation of the linear and angular velocities, as well as adequate robustness to segmentation errors.
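The early-fusion idea described above can be sketched as channel-wise concatenation of the raw image pair with their segmentation maps before the first convolutional layer. The sketch below is illustrative only: the exact channel layout, image sizes, and the `early_fusion_input` helper are assumptions for demonstration, not the paper's implementation.

```python
import numpy as np

def early_fusion_input(initial_rgb, target_rgb, initial_seg, target_seg):
    """Stack the initial/target RGB images and their region-based
    segmentation maps along the channel axis, producing a single
    multi-channel array that a CNN can consume directly (early fusion)."""
    # The segmentation maps are single-channel; give them a channel axis.
    initial_seg = initial_seg[..., np.newaxis]
    target_seg = target_seg[..., np.newaxis]
    # Early fusion: concatenate all inputs along the last (channel) axis.
    return np.concatenate(
        [initial_rgb, target_rgb, initial_seg, target_seg], axis=-1
    )

# Hypothetical example: a 64x64 RGB image pair plus their segmentation maps
# yields a 3 + 3 + 1 + 1 = 8-channel fused input.
h, w = 64, 64
fused = early_fusion_input(
    np.zeros((h, w, 3)), np.zeros((h, w, 3)),
    np.zeros((h, w)), np.zeros((h, w)),
)
print(fused.shape)  # (64, 64, 8)
```

A network built on this input would simply set its first convolution's input-channel count to the fused depth (8 here), after which the architecture can proceed as in any velocity-regression CNN.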