Generating Counterfactual Explanations for Misclassification of Automotive Radar Targets
Neeraj Pandey; Devansh Mathur; Debojyoti Sarkar; Shobha Sundar Ram
IEEE Transactions on Radar Systems, vol. 3, pp. 724-737
DOI: 10.1109/TRS.2025.3566222
Published: 2025-03-01
https://ieeexplore.ieee.org/document/10981818/
Citations: 0
Abstract
Prior studies have demonstrated that inverse synthetic aperture radar (ISAR) images of automotive targets at millimeter-wave (mmW) frequencies provide useful information regarding a target's shape, size, and trajectory and serve as excellent classification features for deep neural networks. However, classification performance is limited by environmental conditions such as multipath, clutter, and occlusion, even when the radar receivers have a high signal-to-noise ratio (SNR). Therefore, for the widespread adoption of deep learning-based ISAR classification in real-world advanced driver assistance systems (ADASs), it is essential to provide a framework that explains the physics-based phenomena responsible for misclassification and builds trust among end users. In this work, we use a deep learning-based generative framework that introduces minimal perturbations into ISAR images belonging to one class to synthesize realistic counterfactual ISAR images that are misclassified as belonging to a second class of automotive vehicles. The networks are specifically trained to emulate occlusion of parts of the target vehicles from the radar's view. Because controlled experiments require occluding specific parts of a vehicle, simulated radar data are used to generate the ISAR images. Our results show that analyses of the counterfactual images generated through this process provide valuable insights into the physics-based causes of misclassification.
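The core idea of a minimal-perturbation counterfactual can be illustrated with a toy sketch, quite apart from the paper's generative networks and ISAR data. The snippet below (a hypothetical illustration, not the authors' method) runs a Wachter-style gradient search: for a fixed linear classifier, it perturbs an input just enough to flip the predicted class, while a proximity penalty keeps the counterfactual close to the original. The weights and input are placeholder values.

```python
import numpy as np

def counterfactual(x, w, b, target, lam=0.1, lr=0.5, steps=200):
    """Minimize lam * ||x' - x||^2 + logistic loss toward `target` by
    plain gradient descent on x'. Returns the perturbed input x'."""
    x_cf = x.copy()
    for _ in range(steps):
        z = w @ x_cf + b                    # classifier logit
        p = 1.0 / (1.0 + np.exp(-z))        # P(class 1 | x_cf)
        grad_cls = (p - target) * w         # gradient of logistic loss w.r.t. x'
        grad_prox = 2.0 * lam * (x_cf - x)  # gradient of proximity penalty
        x_cf -= lr * (grad_cls + grad_prox)
    return x_cf

# Hypothetical linear decision boundary w.x + b = 0 and an input on the
# class-1 side of it.
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([1.0, 0.0])

x_cf = counterfactual(x, w, b, target=0.0)

print("original logit:      ", w @ x + b)     # positive: class 1
print("counterfactual logit:", w @ x_cf + b)  # negative: flipped to class 0
print("perturbation norm:   ", np.linalg.norm(x_cf - x))
```

The proximity weight `lam` trades off how small the perturbation is against how confidently the classifier is pushed across the boundary; the paper's framework plays an analogous role, but uses trained generative networks so the perturbed ISAR images remain physically realistic (e.g., emulating occlusion) rather than arbitrary pixel changes.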