{"title":"Camera Vignetting Model and its Effects on Deep Neural Networks for Object Detection","authors":"Kmeid Saad, Stefan-Alexander Schneider","doi":"10.1109/ICCVE45908.2019.8965233","DOIUrl":null,"url":null,"abstract":"In this paper, we describe a new approach for synthetic image augmentation and its advantages in training Deep Neural Networks (DNNs) for object classification and localization. To address the need for a significant amount of data when training DNNs, for image-based ADAS functions, our method relies on virtually generated scenarios augmented via a physics-based camera model. The camera model implements various optical effects on ideal-synthetic images. For the scope of this paper, we illustrate the performance differences associated with the vignetting effect when training DNNs with and without image augmentation. We show that training on images altered by our camera vignetting model yield to a better performance than using ideal-synthetic images, additionally we illustrate the relationship between the network's performance results and the implemented effect (vignetting in this case). For a start, our results open the possibility for using camera models for training neural networks on synthetic data and pave the way toward further investigations on significant optical and image sensor effects to be modeled/implemented for performance enhancement during the training process. The approach is conducted and evaluated by training a DNN for car detection using the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) and Virtual KITTI (VKITTI) datasets.","PeriodicalId":384049,"journal":{"name":"2019 IEEE International Conference on Connected Vehicles and Expo (ICCVE)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Conference on Connected Vehicles and Expo (ICCVE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCVE45908.2019.8965233","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
In this paper, we describe a new approach for synthetic image augmentation and its advantages in training Deep Neural Networks (DNNs) for object classification and localization. To address the need for large amounts of training data for image-based ADAS functions, our method relies on virtually generated scenarios augmented via a physics-based camera model. The camera model applies various optical effects to ideal synthetic images. Within the scope of this paper, we illustrate the performance differences associated with the vignetting effect when training DNNs with and without image augmentation. We show that training on images altered by our camera vignetting model yields better performance than training on ideal synthetic images, and we illustrate the relationship between the network's performance and the implemented effect (vignetting in this case). As a first step, our results open the possibility of using camera models for training neural networks on synthetic data and pave the way for further investigations into significant optical and image sensor effects that can be modeled for performance enhancement during training. The approach is conducted and evaluated by training a DNN for car detection using the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) and Virtual KITTI (VKITTI) datasets.
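The abstract does not include the model itself, so as a rough illustration of the kind of vignetting augmentation it describes, the sketch below applies a simple radial cos^4 falloff to an image. The function name apply_vignetting, the assumed 40° half field of view, and the strength parameter are illustrative assumptions, not the authors' physics-based camera model.

```python
import numpy as np

def apply_vignetting(image: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """Darken an image toward its corners with a cos^4 radial falloff.

    A minimal sketch of vignetting augmentation; the paper's
    physics-based camera model is more detailed than this.
    """
    h, w = image.shape[:2]
    # Pixel coordinate grids and the optical center of the image.
    y, x = np.mgrid[0:h, 0:w]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    # Radial distance, normalized so the image corner sits at r = 1.
    r = np.sqrt((x - cx) ** 2 + (y - cy) ** 2) / np.sqrt(cx ** 2 + cy ** 2)
    # Map r to an off-axis angle and apply the classic cos^4 law.
    theta_max = np.deg2rad(40.0)  # assumed half field of view
    falloff = np.cos(r * theta_max * strength) ** 4
    if image.ndim == 3:
        falloff = falloff[..., np.newaxis]  # broadcast over color channels
    out = image.astype(np.float32) * falloff
    return np.clip(out, 0, 255).astype(image.dtype)

# Usage: augment an ideal synthetic frame before feeding it to training.
frame = np.full((375, 1242, 3), 200, dtype=np.uint8)  # placeholder image
augmented = apply_vignetting(frame, strength=1.0)
```

Applied to ideal synthetic frames (e.g., from VKITTI), such a transform reproduces the corner darkening a real lens would introduce, which is the effect the paper evaluates during DNN training.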