Vehicle detection in high resolution satellite images with joint-layer deep convolutional neural networks

Yanjun Liu, Na Liu, H. Huo, T. Fang
DOI: 10.1109/M2VIP.2016.7827266
Published in: 2016 23rd International Conference on Mechatronics and Machine Vision in Practice (M2VIP), November 2016
Citations: 3

Abstract

Vehicle detection can provide a wealth of useful data for city planning and transport management. It has always been a challenging task because of complicated backgrounds and the relatively small size of targets, especially in high-resolution satellite images. This paper proposes a novel model called joint-layer deep convolutional neural networks (JLDCNNs), which joins features from the higher and lower layers of deep convolutional neural networks (DCNNs). JLDCNNs aim to cover different scales and rapidly detect vehicles in complex satellite images, overcoming the insufficient feature extraction of traditional DCNNs. The model is evaluated against traditional DCNNs and other methods on a challenging dataset of 20 high-resolution satellite images (containing over 2,400 vehicles) collected from Google Earth. Compared with traditional DCNNs, JLDCNNs improve precision by 16% and recall by 6%, and they also clearly outperform other traditional methods.
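The abstract does not specify the exact joining mechanism, but the general idea of fusing coarse high-layer feature maps with fine low-layer ones can be sketched as upsampling followed by channel concatenation. The function name and tensor shapes below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def join_layer_features(low_feat, high_feat):
    """Join a fine low-layer feature map with a coarse high-layer one.

    Both arrays use (channels, height, width) layout. The coarse map is
    upsampled (nearest neighbour) to the fine map's spatial size, then
    the two are concatenated along the channel axis.
    """
    _, h_low, w_low = low_feat.shape
    _, h_high, w_high = high_feat.shape
    # integer upsampling factors (assumes spatial sizes divide evenly)
    fy, fx = h_low // h_high, w_low // w_high
    upsampled = high_feat.repeat(fy, axis=1).repeat(fx, axis=2)
    return np.concatenate([low_feat, upsampled], axis=0)

low = np.random.rand(64, 32, 32)    # fine, low-layer features
high = np.random.rand(128, 8, 8)    # coarse, high-layer features
joint = join_layer_features(low, high)
print(joint.shape)  # (192, 32, 32)
```

A detector head operating on the joint map sees both the fine spatial detail needed for small vehicles and the high-level semantics of the deeper layers, which is the multi-scale coverage the abstract describes.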