Latent Variable and Classification Performance Analysis of Bird–Drone Spectrograms With Elementary Autoencoder

Daniel White; Mohammed Jahangir; Amit Kumar Mishra; Chris J. Baker; Michail Antoniou

IEEE Transactions on Radar Systems, vol. 3, pp. 115–123, published 2024-12-17
DOI: 10.1109/TRS.2024.3518842 (https://ieeexplore.ieee.org/document/10804883/)
Citations: 0
Abstract
Deep learning with convolutional neural networks (CNNs) has been widely utilized in radar research concerning automatic target recognition. Maximizing numerical metrics to gauge the performance of such algorithms does not necessarily correspond to model robustness against untested targets, nor does it lead to improved model interpretability. Approaches designed to explain the mechanisms behind the operation of a classifier on radar data are proliferating, but they bring with them a significant computational and analysis overhead. This work uses an elementary unsupervised convolutional autoencoder (CAE) to learn a compressed representation of a challenging dataset of urban bird and drone targets, and then tests whether the quality of that representation, assessed via preservation of class labels in the latent space, translates into better classification performance after a separate supervised training stage. It is shown that a CAE that reduces the features output after each layer of the encoder gives rise to the best drone-versus-bird classifier. A clear connection is shown between unsupervised evaluation via label preservation in the latent space and subsequent classification accuracy after supervised fine-tuning, supporting further efforts to optimize latent representations of radar data for both classification performance and model interpretability.
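The architecture the abstract favors — an encoder that reduces the number of feature maps after each layer — can be illustrated with a minimal sketch. This is an assumption for illustration only, not the authors' exact network: layer counts, channel widths, and the 64×64 input size are placeholders, written here in PyTorch.

```python
# Hypothetical minimal CAE sketch: the encoder shrinks the channel
# count (16 -> 8 -> 4) while downsampling, so the latent code is a
# compressed representation of the input spectrogram. All sizes are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn


class CAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: stride-2 convolutions halve spatial resolution;
        # channel counts decrease with depth ("reducing features").
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 4, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: mirror of the encoder, reconstructing the input
        # from the latent code for the unsupervised training stage.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(4, 8, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)        # compressed latent representation
        return self.decoder(z), z  # reconstruction + latent code


# A batch of two single-channel 64x64 "spectrograms" (random stand-ins).
x = torch.randn(2, 1, 64, 64)
recon, latent = CAE()(x)
print(latent.shape)  # latent code: (2, 4, 8, 8)
print(recon.shape)   # reconstruction matches input: (2, 1, 64, 64)
```

After unsupervised training, a classifier head could be attached to the (flattened) latent code for the separate supervised fine-tuning stage the abstract describes; label preservation in `latent` is what the paper evaluates before that stage.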