{"title":"Multiscale information fusion-based deep learning framework for campus vehicle detection","authors":"Zengyong Xu, M. Rao","doi":"10.1080/19479832.2020.1845245","DOIUrl":null,"url":null,"abstract":"ABSTRACT Vehicle detection is a hotspot in the field of remote sensing image analysis. In particular, campus vehicle detection can assess the density of traffic in an area and provide security for students. The detection accuracy is low for dense vehicle areas or complex background areas. According to the feature of campus vehicle, we propose a multiscale information fusion strategy to construct a novel deep learning framework for campus vehicle detection. This new method based on Single Shot MultiBox Detector (SSD) combines a lightweight deep neural network MobileNet to extract features. A sub-network composed of multiple convolutional layers is connected to detect and locate the object. This method fuses feature information on multiple levels. When removing overlapped object candidate regions, the threshold value is set based on the non-maximum suppression method to eliminate redundant candidate regions. Therefore, the generated negative samples are reduced, which guarantees the stable effect of the proposed model. Experiments show that the proposed vehicle detection method has a faster detection speed. The robustness and accuracy of the proposed model are better than other related vehicle detection methods.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":"12 1","pages":"83 - 97"},"PeriodicalIF":1.8000,"publicationDate":"2020-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/19479832.2020.1845245","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Image and Data Fusion","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/19479832.2020.1845245","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"REMOTE SENSING","Score":null,"Total":0}
Citations: 3
Abstract
Vehicle detection is a research hotspot in remote sensing image analysis. In particular, campus vehicle detection can be used to assess traffic density in an area and improve safety for students. However, detection accuracy drops in areas with densely packed vehicles or complex backgrounds. Based on the characteristics of campus vehicles, we propose a multiscale information fusion strategy and use it to construct a novel deep learning framework for campus vehicle detection. The method builds on the Single Shot MultiBox Detector (SSD) and uses the lightweight MobileNet network to extract features. A sub-network composed of multiple convolutional layers is attached to detect and localise objects, fusing feature information across multiple levels. When overlapping object candidate regions are removed, a threshold is set for non-maximum suppression to eliminate redundant candidates; this reduces the number of generated negative samples and keeps the model's performance stable. Experiments show that the proposed vehicle detection method achieves faster detection, and that its robustness and accuracy exceed those of other related vehicle detection methods.
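To make the described pipeline concrete, the following is a minimal sketch (not the authors' implementation) of an SSD-style detector that uses MobileNetV2 as a lightweight backbone, fuses feature maps from multiple scales in a detection sub-network, and removes overlapping candidates with a fixed non-maximum suppression (NMS) threshold. The backbone split points, channel widths, anchor count, class count and the IoU/score thresholds are illustrative assumptions, as is the use of PyTorch/torchvision; the paper does not specify these values.

```python
# Illustrative sketch of an SSD-style detector with a MobileNetV2 backbone,
# multiscale feature fusion, and NMS-based candidate filtering.
# Hyperparameters and layer split points are assumptions, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import mobilenet_v2
from torchvision.ops import nms


class MultiscaleSSDHead(nn.Module):
    """Detection sub-network: convolutional heads predict class scores and
    box offsets at each scale after a simple top-down feature fusion."""

    def __init__(self, in_channels=(32, 96, 1280), num_anchors=6, num_classes=2):
        super().__init__()
        # Project each backbone scale to a common width so the maps can be fused.
        self.laterals = nn.ModuleList(
            [nn.Conv2d(c, 256, kernel_size=1) for c in in_channels])
        self.cls_heads = nn.ModuleList(
            [nn.Conv2d(256, num_anchors * num_classes, kernel_size=3, padding=1)
             for _ in in_channels])
        self.reg_heads = nn.ModuleList(
            [nn.Conv2d(256, num_anchors * 4, kernel_size=3, padding=1)
             for _ in in_channels])

    def forward(self, feats):
        # Top-down fusion: upsample each coarser map and add it to the finer one.
        laterals = [lat(f) for lat, f in zip(self.laterals, feats)]
        for i in range(len(laterals) - 1, 0, -1):
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[-2:], mode="nearest")
        cls_out = [head(x) for head, x in zip(self.cls_heads, laterals)]
        reg_out = [head(x) for head, x in zip(self.reg_heads, laterals)]
        return cls_out, reg_out


class MobileNetSSD(nn.Module):
    """MobileNetV2 feature extractor tapped at three scales, feeding the
    multiscale detection head above."""

    def __init__(self, num_classes=2):
        super().__init__()
        features = mobilenet_v2(weights=None).features
        self.stage1 = features[:7]    # stride 8,  32 output channels
        self.stage2 = features[7:14]  # stride 16, 96 output channels
        self.stage3 = features[14:]   # stride 32, 1280 output channels
        self.head = MultiscaleSSDHead(num_classes=num_classes)

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        return self.head([f1, f2, f3])


def filter_detections(boxes, scores, iou_threshold=0.45, score_threshold=0.5):
    """Drop low-confidence candidates, then suppress overlapping ones with NMS.
    The two thresholds here are placeholders for the values tuned in the paper."""
    keep = scores > score_threshold
    boxes, scores = boxes[keep], scores[keep]
    kept_idx = nms(boxes, scores, iou_threshold)
    return boxes[kept_idx], scores[kept_idx]
```

In this sketch the NMS IoU threshold plays the role the abstract describes: raising it keeps more overlapping candidates (useful for dense parking lots), while lowering it discards more redundant regions and thus generates fewer negative samples during training.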
Journal description:
International Journal of Image and Data Fusion provides a single source of information for all aspects of image and data fusion methodologies, developments, techniques and applications. Image and data fusion techniques are important for combining the many sources of satellite, airborne and ground-based imaging systems, and for integrating these with other related data sets for enhanced information extraction and decision making. Image and data fusion aims at the integration of multi-sensor, multi-temporal, multi-resolution and multi-platform image data, together with geospatial data, GIS, in-situ, and other statistical data sets, for improved information extraction and increased reliability of the information. This leads to more accurate information that provides for robust operational performance, i.e. increased confidence, reduced ambiguity and improved classification, enabling evidence-based management. The journal welcomes original research papers, review papers, shorter letters, technical articles, book reviews and conference reports in all areas of image and data fusion including, but not limited to, the following aspects and topics:
• Automatic registration/geometric aspects of fusing images with different spatial, spectral, or temporal resolutions; phase information; or images acquired in different modes
• Pixel, feature and decision level fusion algorithms and methodologies
• Data assimilation: fusing data with models
• Multi-source classification and information extraction
• Integration of satellite, airborne and terrestrial sensor systems
• Fusing temporal data sets for change detection studies (e.g. for Land Cover/Land Use Change studies)
• Image and data mining from multi-platform, multi-source, multi-scale, multi-temporal data sets (e.g. geometric information, topological information, statistical information, etc.)