Mingtao Dong, Yuanhao Cui, Xiaojun Jing, Xiaokang Liu, Jianquan Li
{"title":"基于数据增强的SAR图像端到端目标检测与分类","authors":"Mingtao Dong, Yuanhao Cui, Xiaojun Jing, Xiaokang Liu, Jianquan Li","doi":"10.1109/COMPEM.2019.8779096","DOIUrl":null,"url":null,"abstract":"While applying traditional algorithm to synthetic aperture radar automatic target recognition (SAR-ATR) is facing difficulties, deep learning-based end-to-end object detection algorithms are becoming better options due to the automatic feature extraction and availability of high-quality data. In this paper, both single-staged and two-staged end-to-end models are experimented. We proposed modified Faster R-CNN models and SSD models to address SAR-ATR. Data augmentation techniques including random flipping, multiplying, rotation, translation, and flipping are applied to MSTAR SAR dataset to solve problems related to limited training samples. Transfer learning of SSD models and Faster R-CNN models on COCO dataset are utilized. Both existing algorithms and proposed algorithms are tested in ten-class MSTAR dataset. Experimental results show that SSD-Inception with widened network and MobileNet-SSD with light weight structure perform with much faster speed and cheaper computational cost, hundreds of times faster than Faster R-CNNs. MobileNet-SSD is especially suitable for mobile devices with 0.028 second per batch*step. Faster R-CNN with ResNet-101 and Inception ResNet perform in slightly higher accuracy than SSDs, reaching 99.4% mAP. MobileNet-SSD and SSD-Inception reach 96.79% and 99.16% mAP respectively.","PeriodicalId":342849,"journal":{"name":"2019 IEEE International Conference on Computational Electromagnetics (ICCEM)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"End-to-End Target Detection and Classification with Data Augmentation in SAR Images\",\"authors\":\"Mingtao Dong, Yuanhao Cui, Xiaojun Jing, Xiaokang Liu, Jianquan Li\",\"doi\":\"10.1109/COMPEM.2019.8779096\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"While applying traditional algorithm to synthetic aperture radar automatic target recognition (SAR-ATR) is facing difficulties, deep learning-based end-to-end object detection algorithms are becoming better options due to the automatic feature extraction and availability of high-quality data. In this paper, both single-staged and two-staged end-to-end models are experimented. We proposed modified Faster R-CNN models and SSD models to address SAR-ATR. Data augmentation techniques including random flipping, multiplying, rotation, translation, and flipping are applied to MSTAR SAR dataset to solve problems related to limited training samples. Transfer learning of SSD models and Faster R-CNN models on COCO dataset are utilized. Both existing algorithms and proposed algorithms are tested in ten-class MSTAR dataset. Experimental results show that SSD-Inception with widened network and MobileNet-SSD with light weight structure perform with much faster speed and cheaper computational cost, hundreds of times faster than Faster R-CNNs. MobileNet-SSD is especially suitable for mobile devices with 0.028 second per batch*step. Faster R-CNN with ResNet-101 and Inception ResNet perform in slightly higher accuracy than SSDs, reaching 99.4% mAP. 
MobileNet-SSD and SSD-Inception reach 96.79% and 99.16% mAP respectively.\",\"PeriodicalId\":342849,\"journal\":{\"name\":\"2019 IEEE International Conference on Computational Electromagnetics (ICCEM)\",\"volume\":\"4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-03-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE International Conference on Computational Electromagnetics (ICCEM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/COMPEM.2019.8779096\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Conference on Computational Electromagnetics (ICCEM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/COMPEM.2019.8779096","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
End-to-End Target Detection and Classification with Data Augmentation in SAR Images
Abstract
While applying traditional algorithms to synthetic aperture radar automatic target recognition (SAR-ATR) faces difficulties, deep learning-based end-to-end object detection algorithms are becoming better options thanks to automatic feature extraction and the availability of high-quality data. In this paper, both single-stage and two-stage end-to-end models are investigated. We propose modified Faster R-CNN and SSD models to address SAR-ATR. Data augmentation techniques, including random flipping, multiplication, rotation, and translation, are applied to the MSTAR SAR dataset to mitigate the limited number of training samples. Transfer learning from SSD and Faster R-CNN models pre-trained on the COCO dataset is employed. Both existing and proposed algorithms are tested on the ten-class MSTAR dataset. Experimental results show that SSD-Inception, with its widened network, and MobileNet-SSD, with its lightweight structure, run hundreds of times faster than the Faster R-CNN models at a much lower computational cost. MobileNet-SSD, at 0.028 seconds per batch step, is especially suitable for mobile devices. Faster R-CNN with ResNet-101 and Inception-ResNet backbones achieves slightly higher accuracy than the SSDs, reaching 99.4% mAP, while MobileNet-SSD and SSD-Inception reach 96.79% and 99.16% mAP, respectively.
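As a rough illustration of the augmentation step described in the abstract, the sketch below applies the four named operations (random flipping, multiplication, rotation, and translation) to a single SAR image chip. This is not the authors' implementation: the function name, the parameter ranges, and the reading of "multiplication" as random pixel-intensity scaling are assumptions made purely for illustration.

```python
# Illustrative sketch only: augmentations named in the abstract for MSTAR SAR chips.
# All ranges and the helper name are assumptions, not the paper's settings.
import numpy as np
from scipy import ndimage

def augment_sar_chip(chip: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply random flipping, multiplication, rotation, and translation to a
    single-channel SAR image chip (H x W, float32)."""
    out = chip.astype(np.float32)

    # Random horizontal / vertical flipping.
    if rng.random() < 0.5:
        out = np.fliplr(out)
    if rng.random() < 0.5:
        out = np.flipud(out)

    # Random multiplication: scale pixel intensities by a random factor
    # (assumed range; the abstract does not state one).
    out = out * rng.uniform(0.8, 1.2)

    # Random rotation about the chip centre (assumed +/- 15 degrees).
    angle = rng.uniform(-15.0, 15.0)
    out = ndimage.rotate(out, angle, reshape=False, mode="nearest")

    # Random translation (assumed +/- 4 pixels along each axis).
    shift = rng.uniform(-4.0, 4.0, size=2)
    out = ndimage.shift(out, shift, mode="nearest")

    return out

# Example: expand a small set of chips several-fold before training.
rng = np.random.default_rng(0)
chips = [np.zeros((128, 128), dtype=np.float32)]  # stand-in for real MSTAR chips
augmented = [augment_sar_chip(c, rng) for c in chips for _ in range(4)]
```

In practice such augmentations would be applied on the fly inside the detector's input pipeline rather than by materialising an enlarged dataset; the list-comprehension above is only meant to show the effect of expanding a small sample pool.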