Civilian target detection using hierarchical fusion

Balasubramanian Lakshminarayanan, H. Qi
DOI: 10.1109/AIPR.2005.22
Published in: 34th Applied Imagery and Pattern Recognition Workshop (AIPR'05), 2005-10-19
Citations: 2

Abstract

Automatic target recognition (ATR) is the process of aided or unaided target detection and recognition using data from different sensors. Fusion techniques are used to improve ATR because they reduce the system's dependence on a single sensor and increase noise tolerance. In this work, ATR is performed on civilian targets, which are considered more difficult to classify than military targets. The dataset is provided by the Night Vision & Electronic Sensors Directorate (NVESD) and was collected using the sensor fusion testbed (SFTB) developed by Northrop Grumman Mission Systems. Stationary color and infrared cameras capture images of seven different vehicles at different orientations and distances. The targets include two sedans, two SUVs, two light trucks, and a heavy truck. Fusion is performed at the event level and the sensor level using temporal fusion and behavior-knowledge-space (BKS) fusion, respectively. It is shown that fusion provides more accurate and robust classification than classification of individual frames without fusion. The classification experiments show mean classification rates of 65.0%, 70.1%, and 77.7% for individual-frame classification, temporal fusion, and BKS fusion, respectively, demonstrating that classification accuracy increases as the level of fusion goes higher. By grouping the targets into cars, SUVs, and light trucks, thereby reducing the number of classes to three, higher mean classification rates of 75.4%, 90.0%, and 94.8% were obtained.
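The abstract does not give implementation details, but the two fusion levels it names are standard techniques and can be sketched as follows: event-level temporal fusion as a majority vote over per-frame decisions, and BKS fusion as a lookup table that maps each observed combination of sensor-level decisions to the most frequent true class seen during training. All function names and the toy labels below are illustrative assumptions, not the authors' code.

```python
# Sketch of the two fusion levels described in the abstract (illustrative only).
from collections import Counter, defaultdict

def temporal_fusion(frame_decisions):
    """Event-level fusion: majority vote over the per-frame class decisions."""
    return Counter(frame_decisions).most_common(1)[0][0]

def train_bks(decision_tuples, true_labels):
    """Build the behavior-knowledge space: for each observed combination of
    sensor-level decisions (one entry per sensor), record the true class that
    occurred most often with that combination in the training data."""
    cells = defaultdict(Counter)
    for decisions, label in zip(decision_tuples, true_labels):
        cells[tuple(decisions)][label] += 1
    return {cell: counts.most_common(1)[0][0] for cell, counts in cells.items()}

def bks_fusion(bks_table, decisions, fallback):
    """Sensor-level fusion: look up the fused decision; fall back to a default
    (e.g. one sensor's own vote) for combinations never seen in training."""
    return bks_table.get(tuple(decisions), fallback)

# Toy usage with two sensors (color, infrared) and hypothetical labels:
table = train_bks(
    [("sedan", "SUV"), ("sedan", "SUV"), ("SUV", "SUV")],
    ["sedan", "sedan", "SUV"],
)
fused = bks_fusion(table, ("sedan", "SUV"), fallback="sedan")  # -> "sedan"
```

Because BKS conditions on the joint decision pattern rather than on each sensor independently, it can recover the correct class even when one sensor is systematically wrong for certain combinations, which is consistent with the higher rates the abstract reports for BKS over temporal fusion.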