{"title":"利用可解释性:基于概念的深度神经网络行人检测","authors":"P. Feifel, Frank Bonarens, F. Köster","doi":"10.1145/3488904.3493379","DOIUrl":null,"url":null,"abstract":"The automation of driving systems relies on proof of the correct functioning of perception. Arguing the safety of deep neural networks (DNNs) must involve quantifiable evidence. Currently, the application of DNNs suffers from an incomprehensible behavior. It is still an open question if post-hoc methods mitigate the safety concerns of trained DNNs. Our work proposes a method for inherently interpretable and concept-based pedestrian detection (CPD). CPD explicitly structures the latent space with concept vectors that learn features for body parts as predefined concepts. The distance-based clustering and separation of latent representations build an interpretable reasoning process. Hence, CPD predicts a body part segmentation based on distances of latent representations to concept vectors. A non-interpretable 2d bounding box prediction for pedestrians complements the segmentation. The proposed CPD generates additional information that can be of great value in a safety argumentation of a DNN for pedestrian detection. We report competitive performance for the task of pedestrian detection. Finally, CPD enables concept-based tests to quantify evidence of a safe perception in automated driving systems.","PeriodicalId":332312,"journal":{"name":"Proceedings of the 5th ACM Computer Science in Cars Symposium","volume":"109 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Leveraging Interpretability: Concept-based Pedestrian Detection with Deep Neural Networks\",\"authors\":\"P. Feifel, Frank Bonarens, F. Köster\",\"doi\":\"10.1145/3488904.3493379\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The automation of driving systems relies on proof of the correct functioning of perception. Arguing the safety of deep neural networks (DNNs) must involve quantifiable evidence. Currently, the application of DNNs suffers from an incomprehensible behavior. It is still an open question if post-hoc methods mitigate the safety concerns of trained DNNs. Our work proposes a method for inherently interpretable and concept-based pedestrian detection (CPD). CPD explicitly structures the latent space with concept vectors that learn features for body parts as predefined concepts. The distance-based clustering and separation of latent representations build an interpretable reasoning process. Hence, CPD predicts a body part segmentation based on distances of latent representations to concept vectors. A non-interpretable 2d bounding box prediction for pedestrians complements the segmentation. The proposed CPD generates additional information that can be of great value in a safety argumentation of a DNN for pedestrian detection. We report competitive performance for the task of pedestrian detection. 
Finally, CPD enables concept-based tests to quantify evidence of a safe perception in automated driving systems.\",\"PeriodicalId\":332312,\"journal\":{\"name\":\"Proceedings of the 5th ACM Computer Science in Cars Symposium\",\"volume\":\"109 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-11-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 5th ACM Computer Science in Cars Symposium\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3488904.3493379\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 5th ACM Computer Science in Cars Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3488904.3493379","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The automation of driving systems relies on proof that perception functions correctly, and any safety argument for deep neural networks (DNNs) must rest on quantifiable evidence. Currently, the application of DNNs suffers from their largely incomprehensible behavior, and it remains an open question whether post-hoc methods mitigate the safety concerns of trained DNNs. Our work proposes a method for inherently interpretable, concept-based pedestrian detection (CPD). CPD explicitly structures the latent space with concept vectors that learn features for body parts as predefined concepts. The distance-based clustering and separation of latent representations yield an interpretable reasoning process: CPD predicts a body-part segmentation from the distances of latent representations to the concept vectors. A non-interpretable 2D bounding-box prediction for pedestrians complements the segmentation. The proposed CPD generates additional information that can be of great value in a safety argument for a pedestrian-detection DNN. We report competitive performance on the task of pedestrian detection. Finally, CPD enables concept-based tests that quantify evidence of safe perception in automated driving systems.
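The distance-based reasoning described in the abstract can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch example of a segmentation head that scores each latent-feature pixel by its Euclidean distance to learnable concept vectors; the class name `ConceptDistanceHead`, the parameters `latent_dim` and `num_concepts`, and all tensor shapes are illustrative assumptions, not the authors' exact CPD architecture.

```python
import torch
import torch.nn as nn

class ConceptDistanceHead(nn.Module):
    """Hypothetical sketch of a concept-based segmentation head.

    Each body-part concept is represented by a learnable vector in
    latent space; per-pixel class scores are the negative Euclidean
    distances between latent features and those concept vectors, so
    a pixel is assigned to the concept it lies closest to.
    """

    def __init__(self, latent_dim: int, num_concepts: int):
        super().__init__()
        # One learnable concept vector per predefined body-part concept.
        self.concepts = nn.Parameter(torch.randn(num_concepts, latent_dim))

    def forward(self, latent: torch.Tensor) -> torch.Tensor:
        # latent: (B, C, H, W) feature map from some backbone.
        b, c, h, w = latent.shape
        feats = latent.permute(0, 2, 3, 1).reshape(-1, c)  # (B*H*W, C)
        # Pairwise Euclidean distances to every concept vector.
        dists = torch.cdist(feats, self.concepts)          # (B*H*W, K)
        # Closer to a concept vector => higher score for that concept.
        scores = -dists.reshape(b, h, w, -1).permute(0, 3, 1, 2)
        return scores                                      # (B, K, H, W)

# Usage: the argmax over concept scores yields a body-part segmentation map.
head = ConceptDistanceHead(latent_dim=64, num_concepts=9)
seg_logits = head(torch.randn(2, 64, 32, 16))
seg_map = seg_logits.argmax(dim=1)  # (2, 32, 16) per-pixel body-part labels
```

Because the scores are plain distances in latent space, the intermediate quantities remain inspectable, which is the sense in which such a head supports an interpretable reasoning process; the paper's non-interpretable bounding-box branch would sit alongside this head rather than replace it.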