{"title":"Comparison of performance measures for intercept detectors","authors":"R. Rifkin","doi":"10.1109/TCC.1994.472091","DOIUrl":null,"url":null,"abstract":"The performance of intercept detectors is generally analyzed using the Neyman-Pearson criterion, which evaluates the probability of detection, P/sub d/, as a function of input signal-to-noise ratio (SNR) at a particular probability of false alarm, P/sub fa/. While the Neyman-Pearson criterion closely represents typical system requirements, it is often difficult to evaluate, and obscures important effects in tradeoff analyses. Consequently other performance measures, such as output SNR or the deflection, are often used instead. While the output SNR is useful for evaluating and optimizing a particular detector class, it may be inappropriate when applied to different detector classes. This paper examines the output SNR and probability of detection, for three intercept detectors: the radiometer, the chip rate detector, and the carrier harmonic detector (frequency doubler). The analysis makes the following assumptions: a direct-sequence pseudo-noise spread spectrum signal of interest additive stationary white Gaussian-distributed noise background with fixed known power (i.e., no noise power uncertainty) known power spectral density of the signal of interest input SNR much less than unity observation time much greater than the chip duration. The analysis highlights the danger of blindly evaluating detector performance solely by output SNR. Specific examples are identified in which the radiometer's performance, using the Neyman-Pearson criterion, is inferior to that of the cyclostationary feature detectors operating at equal output SNR for the decision variable.<<ETX>>","PeriodicalId":206310,"journal":{"name":"Proceedings of TCC'94 - Tactical Communications Conference","volume":"93 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1994-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of TCC'94 - Tactical Communications Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TCC.1994.472091","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4
Abstract
The performance of intercept detectors is generally analyzed using the Neyman-Pearson criterion, which evaluates the probability of detection, P_d, as a function of input signal-to-noise ratio (SNR) at a particular probability of false alarm, P_fa. While the Neyman-Pearson criterion closely represents typical system requirements, it is often difficult to evaluate, and it obscures important effects in tradeoff analyses. Consequently, other performance measures, such as output SNR or the deflection, are often used instead. While the output SNR is useful for evaluating and optimizing a particular detector class, it may be inappropriate when applied across different detector classes. This paper examines the output SNR and probability of detection for three intercept detectors: the radiometer, the chip rate detector, and the carrier harmonic detector (frequency doubler). The analysis makes the following assumptions: (i) a direct-sequence pseudo-noise spread spectrum signal of interest; (ii) an additive stationary white Gaussian-distributed noise background with fixed, known power (i.e., no noise power uncertainty); (iii) known power spectral density of the signal of interest; (iv) input SNR much less than unity; and (v) observation time much greater than the chip duration. The analysis highlights the danger of evaluating detector performance solely by output SNR. Specific examples are identified in which the radiometer's performance, under the Neyman-Pearson criterion, is inferior to that of the cyclostationary feature detectors operating at equal output SNR for the decision variable.
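To illustrate the kind of Neyman-Pearson evaluation the abstract describes, the following is a minimal sketch for the radiometer (energy detector) only, under the abstract's assumptions (known noise power, input SNR much less than unity, long observation time). It uses the standard low-SNR Gaussian approximation of the energy-detector statistic, where the deflection (output SNR) of the decision variable is roughly d = sqrt(TW) * SNR_in and P_d = Q(Q^{-1}(P_fa) - d). This approximation, the function name radiometer_pd, and the numeric values are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: Neyman-Pearson evaluation of a wideband radiometer under a
# low-SNR Gaussian approximation of the energy-detector statistic.
# Assumptions (mirroring the abstract): white Gaussian noise of known power,
# input SNR << 1, observation time much longer than the chip duration.
from scipy.stats import norm


def radiometer_pd(snr_in: float, time_bandwidth: float, pfa: float) -> float:
    """Approximate probability of detection for an energy detector.

    Under the Gaussian approximation, the output SNR (deflection) of the
    decision variable is d = sqrt(TW) * SNR_in, and
    P_d = Q(Q^{-1}(P_fa) - d), where Q is the Gaussian tail function.
    """
    deflection = (time_bandwidth ** 0.5) * snr_in  # output SNR of the decision variable
    threshold = norm.isf(pfa)                      # Q^{-1}(P_fa)
    return norm.sf(threshold - deflection)         # Q(threshold - deflection)


if __name__ == "__main__":
    # Example (illustrative values): input SNR of -15 dB,
    # time-bandwidth product TW = 1e4, P_fa = 1e-3.
    snr_linear = 10 ** (-15 / 10)
    print(f"P_d ~ {radiometer_pd(snr_linear, 1e4, 1e-3):.3f}")
```

A comparison of detector classes in the spirit of the paper would repeat this calculation for the chip rate and carrier harmonic detectors with their own decision-variable statistics; the paper's point is that matching detectors at equal output SNR of the decision variable does not guarantee equal P_d at a fixed P_fa.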