2023 IEEE Radar Conference (RadarConf23): Latest Publications

MGRFT-Based Coherent Integration Method for High-Speed Maneuvering Target with Range Ambiguity
2023 IEEE Radar Conference (RadarConf23) Pub Date: 2023-05-01 DOI: 10.1109/RadarConf2351548.2023.10149681
Kaiyao Wang, Xiaolong Li, Haixu Chen, Mingxing Wang
Abstract: When radar detects a high-speed maneuvering target, not only will the phenomena of range migration (RM) and Doppler migration (DM) appear, but also the phenomenon of range ambiguity, which poses challenges to the traditional accumulation processing method. In this paper, we first establish the target echo model with range ambiguity based on the spatial geometric model. On this basis, we propose a coherent integration method based on the modulo generalized Radon Fourier transform (MGRFT). By performing the modulo addressing operation during the joint search of motion parameters, the proposed method can correct RM and DM and deal with the problem of trajectory breakage under range ambiguity, so as to achieve the coherent integration of echo energy and effectively improve the signal-to-noise ratio (SNR). Finally, experimental results demonstrate the effectiveness of the proposed algorithm.
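The modulo addressing idea can be illustrated with a toy simulation (the simplified echo model, bin counts, and motion parameters below are assumptions for illustration, not the paper's signal model): a target whose range walk crosses the unambiguous window reappears at the folded bin, and taking the searched range index modulo the window size keeps the integration trajectory unbroken.

```python
import numpy as np

# Toy scenario: one scatterer per pulse, range walk exceeds the window.
num_pulses = 64
num_bins = 100          # range bins in the unambiguous window
r0, v = 90.0, 0.8       # initial bin and bins-per-pulse (causes fold-over)

echo = np.zeros((num_pulses, num_bins))
for n in range(num_pulses):
    true_bin = r0 + v * n
    echo[n, int(true_bin) % num_bins] = 1.0   # apparent (folded) position

def integrate(r0_hat, v_hat, use_modulo):
    """Sum echo energy along the searched trajectory (r0_hat, v_hat)."""
    total = 0.0
    for n in range(num_pulses):
        idx = int(r0_hat + v_hat * n)
        if use_modulo:
            idx %= num_bins        # modulo addressing follows the fold
        elif not (0 <= idx < num_bins):
            continue               # trajectory breaks: samples are lost
        total += echo[n, idx]
    return total

print(integrate(r0, v, use_modulo=True))    # 64.0: all pulses integrated
print(integrate(r0, v, use_modulo=False))   # 13.0: energy lost after fold
```

With modulo addressing all 64 pulses integrate coherently; without it the trajectory breaks at the fold and most of the echo energy is lost, which is the failure mode the MGRFT is designed to avoid.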
Citations: 0
Decentralized Multi-Target Tracking for Netted Radar Systems with Non-Overlapping Field of View
2023 IEEE Radar Conference (RadarConf23) Pub Date: 2023-05-01 DOI: 10.1109/RadarConf2351548.2023.10149723
Cong Peng, Haiyi Mao, Yue Liu, Lei Chai, Wei Yi
Abstract: In this paper, a robust and high-accuracy decentralized fusion strategy is proposed for multi-target tracking (MTT) in netted radar systems with non-overlapping fields of view (FoV). Each radar in the network runs a local Probability Hypothesis Density (PHD) filter with a decentralized consensus protocol to reduce communication bandwidth and eliminate information inconsistency among nodes. The core of this process is an effective fusion strategy. Our proposed method adopts the geometric covariance intersection (GCI) rule to improve fusion accuracy. However, standard GCI fusion is not suitable for netted radar systems with non-overlapping FoVs because it only accounts for targets within the intersection of the radar FoVs. We therefore extend the weights in GCI fusion from scalars to a set of state-dependent weights, which makes the fusion more robust. Furthermore, since radar FoVs are typically unknown and time-varying in practical scenarios, we incorporate a clustering algorithm based on highest posterior density to maintain good fusion performance. A Gaussian mixture implementation of the proposed method is provided, and numerical simulations verify the effectiveness of the proposed method.
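For reference, the standard scalar-weight GCI rule that the paper generalizes to state-dependent weights can be sketched for two Gaussian posteriors (the means, covariances, and the weight value below are arbitrary example numbers):

```python
import numpy as np

def gci_fuse(x1, P1, x2, P2, w=0.5):
    """Scalar-weight GCI fusion of N(x1, P1) and N(x2, P2).

    Fused information matrix is the weighted sum of the inputs'
    information matrices; the fused mean follows from the weighted
    information vectors.
    """
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * I1 + (1 - w) * I2)        # fused covariance
    x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)       # fused mean
    return x, P

# Two local estimates with complementary uncertainty per axis.
x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([2.0, 0.0]), np.diag([4.0, 1.0])
xf, Pf = gci_fuse(x1, P1, x2, P2)
print(xf)   # fused mean
print(Pf)   # fused covariance
```

The paper's contribution replaces the scalar `w` with state-dependent weights so that targets outside the FoV intersection are not suppressed by the fusion.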
Citations: 0
Adaptive Target Enhancer: Bridging the Gap between Synthetic and Measured SAR Images for Automatic Target Recognition
2023 IEEE Radar Conference (RadarConf23) Pub Date: 2023-05-01 DOI: 10.1109/RadarConf2351548.2023.10149739
A. Campos, Ricardo D. Molin, Lucas P. Ramos, Renato B. Machado, V. Vu, M. Pettersson
Abstract: Automatic target recognition (ATR) algorithms have been successfully used for vehicle classification in synthetic aperture radar (SAR) images over the past few decades. For this application, however, the scarcity of labeled data is often a limiting factor for supervised approaches. While the advent of computer-simulated images may result in additional data for ATR, there is still a substantial gap between synthetic and measured images. In this paper, we propose the so-called adaptive target enhancer (ATE), a tool designed to automatically delimit and weight the region of an image that contains or is affected by the presence of a target. Results for the publicly released Synthetic and Measured Paired and Labeled Experiment (SAMPLE) dataset show that, by defining regions of interest and suppressing the background, we can increase the classification accuracy from 68% to 84% while only using artificially generated images for training.
Citations: 0
A Narrowband Criterion for Arrays of General Geometry
2023 IEEE Radar Conference (RadarConf23) Pub Date: 2023-05-01 DOI: 10.1109/RadarConf2351548.2023.10149582
M. Leifer
Abstract: This paper derives the bandwidth that signals should be limited to, by filtration or frequency binning, to ensure that they are narrowband and are processed correctly by array processing algorithms for, e.g., interference nulling and angle of arrival estimation. The narrowband condition is defined as the maximum bandwidth signal that can be fully described by a single significant eigenvalue of the array covariance matrix. A simple closed-form expression for this bandwidth is derived that applies to arrays of arbitrary geometry and that clearly indicates how the maximum bandwidth scales with signal SNR and with array geometry.
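The single-significant-eigenvalue notion can be checked numerically for a uniform linear array (a sketch under assumed parameters — element count, spacing, and arrival angle are illustrative; the paper's closed-form bandwidth expression is not reproduced here): average the steering-vector outer product over the signal band and watch a second covariance eigenvalue emerge as the fractional bandwidth grows.

```python
import numpy as np

def eig_ratio(num_el, spacing_wl, frac_bw, theta_deg=30.0, nf=201):
    """Ratio of 2nd to 1st eigenvalue of the band-averaged covariance
    for a ULA (spacing in center wavelengths, noise-free signal)."""
    u = np.sin(np.deg2rad(theta_deg))
    n = np.arange(num_el)
    R = np.zeros((num_el, num_el), dtype=complex)
    # Average steering outer products over the normalized band f/f0.
    for f in np.linspace(1 - frac_bw / 2, 1 + frac_bw / 2, nf):
        a = np.exp(-2j * np.pi * f * spacing_wl * n * u)
        R += np.outer(a, a.conj())
    R /= nf
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]
    return lam[1] / lam[0]

print(eig_ratio(8, 0.5, 0.01))   # narrow band: ratio near zero
print(eig_ratio(8, 0.5, 0.20))   # wider band: second eigenvalue emerges
```

As the abstract indicates, where the acceptable ratio threshold sits depends on the signal SNR, which is what the paper's closed-form criterion captures.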
Citations: 0
Snow Radar Echogram Layer Tracker: Deep Neural Networks for radar data from NASA Operation IceBridge
2023 IEEE Radar Conference (RadarConf23) Pub Date: 2023-05-01 DOI: 10.1109/RadarConf2351548.2023.10149734
O. Ibikunle, Hara Madhav Talasila, D. Varshney, J. Paden, Jilu Li, M. Rahnemoonfar
Abstract: This paper documents the performance of two deep learning models developed to automatically track internal layers in Snow Radar echograms. A novel iterative RowBlock approach is developed to circumvent the small-training-data problem peculiar to radar data by recasting the pixel-wise dense prediction problem as a multi-class classification task with millions of training samples. The proposed models, Skip_MLP and LSTM_PE, achieved tracking accuracies of 81.2% and 87.9%, respectively, on echograms from the dry snow zone in Greenland. Moreover, 96.7% and 97.3% of the errors, respectively, are less than or equal to two pixels. The tracked layers were used to estimate annual accumulation over two decades and compared with Regional Atmosphere Model (MAR) estimates, yielding a coefficient of determination of 0.943 and thus validating this approach.
Citations: 0
Analysis of Keller Cones for RF Imaging
2023 IEEE Radar Conference (RadarConf23) Pub Date: 2023-05-01 DOI: 10.1109/RadarConf2351548.2023.10149785
Anurag Pallaprolu, Belal Korany, Y. Mostofi
Abstract: Imaging still objects with the received signal power of off-the-shelf WiFi transceivers is considerably challenging. The interaction of object edges with the incoming wave, dictated by the Geometrical Theory of Diffraction and the resulting Keller cones, presents new possibilities for imaging with WiFi via edge tracing. In this paper, we are interested in bringing a comprehensive understanding to the impact of several different parameters on the Keller cones and the corresponding edge-based imaging, thereby developing a foundation for a methodical imaging system design. More specifically, we consider the impact of parameters such as curvature of a soft edge, edge orientation, distance to the receiver grid, and transmitter location on edge-based WiFi imaging, via both analysis and extensive experimentation. We finally show that Keller cones can be used for imaging objects that lack visibly sharp edges, as long as the curvature of the edge is small enough, by imaging a number of such everyday objects.
Citations: 0
RCS-Based Imaging of Extended Targets for Classification in Multistatic Radar Systems
2023 IEEE Radar Conference (RadarConf23) Pub Date: 2023-05-01 DOI: 10.1109/RadarConf2351548.2023.10149779
S. Sruti, A. A. Kumar, K. Giridhar
Abstract: Efficient non-cooperative target imaging and classification are crucial for defense radar systems. Radar cross section (RCS) images provide distinctive characteristics of targets; they are easily measurable and hence can be used as features for accurate target classification. In this work, a low-complexity composite RCS imaging technique for detected extended targets is developed using an inverse synthetic aperture radar oriented approach in a distributed multistatic radar system. The algorithm employs what we call a "floating grid-based formulation", which overcomes the exact time and phase alignment shortcomings in the fusion of measurements. The RCS values on the considered grid are estimated using a robust recovery technique. Bistatic RCS values obtained for different transmitter-receiver pairs are fused to obtain a comprehensive RCS image of the target. This image is also used to derive the synthetic shape of the target, which gives a notion of the target's dimensions. Simulation results show that the multistatic RCS images obtained for different extended target shapes are distinct, as are the derived synthetic shapes. Imaging the RCS and shape in this way provides a unique representation of the target signatures and can thus serve as potential features for good target classification.
Citations: 0
Modular Multi-Channel RFSoC System Expansion and Array Design
2023 IEEE Radar Conference (RadarConf23) Pub Date: 2023-05-01 DOI: 10.1109/RadarConf2351548.2023.10149783
N. Peters, C. Horne, Amin D. Amiri, Piers J. Beasley, M. Ritchie
Abstract: Radio Frequency (RF) sensors are often designed to operate in a single mode or configuration. The demands of operating in future challenging Electromagnetic Environment (EM) conditions require innovative solutions and significant changes from current radar architectures. This paper provides a system-level review of a modular multi-function RF sensor solution: an N-node system that can either drive a single powerful array or be deployed as N multistatic RF sensor nodes. Both configurations use a common digital solution based on Xilinx Radio Frequency System on a Chip (RFSoC) technology. An antenna array operating at C-band has been designed for the project, along with daughter-boards that provide access to all 8 receive channels of the Xilinx ZCU111 RFSoC development board. A solution to the challenges of synchronising the ADC channels (including across multiple ZCU111 boards) is also presented, with results showing the synchronisation performance.
Citations: 0
CFAR-guided Convolution Neural Network for Large Scale Scene SAR Ship Detection
2023 IEEE Radar Conference (RadarConf23) Pub Date: 2023-05-01 DOI: 10.1109/RadarConf2351548.2023.10149747
Zikang Shao, Xiaoling Zhang, Xiaowo Xu, Tianjiao Zeng, Tianwen Zhang, Jun Shi
Abstract: Ship target detection in large-scene synthetic aperture radar (SAR) images is very challenging. Detectors based on convolutional neural networks (CNNs) perform better than the traditional constant false alarm rate (CFAR) detector, but two defects remain: 1) small ship targets make it hard to extract ship features, and 2) abandoning traditional methods entirely increases the positioning risk. To address these problems, we propose a SAR ship detection network that combines CFAR and CNN, called the CFAR-guided Convolution Neural Network (CG-CNN). CG-CNN fuses CFAR and CNN at both the original-image level and the feature level, strengthening the guiding role of CFAR detection for CNN detection. Detection results on the Large-Scale SAR Ship Detection Dataset-v1.0 show that CG-CNN has the best detection performance.
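The classical CFAR stage that guides the CNN can be illustrated with a minimal 1-D cell-averaging CFAR (the window sizes, threshold scale, and exponential clutter model are assumptions for illustration, not the paper's detector):

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=4.0):
    """Return indices whose power exceeds scale times the local noise
    estimate, where the noise is the mean of `train` cells on each side
    of the cell under test, excluding `guard` cells around it."""
    hits = []
    half = guard + train
    for i in range(half, len(power) - half):
        left = power[i - half : i - guard]
        right = power[i + guard + 1 : i + half + 1]
        noise = np.mean(np.concatenate([left, right]))  # cell averaging
        if power[i] > scale * noise:
            hits.append(i)
    return hits

rng = np.random.default_rng(0)
x = rng.exponential(1.0, 200)    # noise-like clutter power profile
x[100] += 60.0                   # strong point target
print(ca_cfar(x))                # detections should include index 100
```

In CG-CNN such CFAR detections are not the final output; they serve as guidance fused with CNN features, which is what mitigates the positioning risk of either method alone.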
Citations: 0
Cramér-Rao Lower Bound and Estimation Algorithms For Scene-based Bistatic Radar Waveform Estimation
2023 IEEE Radar Conference (RadarConf23) Pub Date: 2023-05-01 DOI: 10.1109/RadarConf2351548.2023.10149689
M. Coutiño, A. M. Sardarabadi, P. Cox, W. V. van Rossum, L. Anitori
Abstract: Cooperative radar operations typically rely on the exchange of a limited amount of information to improve the quality of the estimated target parameters. Unfortunately, in many instances, not all necessary information can be accessed or communicated, e.g., due to lack of line of sight (LOS) or limited resources. This problem is exacerbated by the advent of novel (irregular) transmit waveforms exhibiting a large number of degrees of freedom. For example, where both monostatic and bistatic measurements are available, enhanced parameter estimation can be achieved by sharing only synchronization and geographical information between the two platforms. In this paper, we focus on this scenario and derive the Cramér-Rao lower bound for the estimation of the unknown bistatic waveform without LOS, under mild assumptions on the second-order statistics of the bistatic and monostatic returns. We also devise a set of algorithms exploiting the monostatic estimated scene, based on spectral methods, factor analysis, and calibration techniques. Through numerical experiments, we compare the performance and discuss the limitations of the introduced techniques.
Citations: 0