{"title":"在人再识别网络中实现稳健的解释偏差","authors":"Esube Bekele, W. Lawson, Z. Horne, S. Khemlani","doi":"10.1109/CVPRW.2018.00291","DOIUrl":null,"url":null,"abstract":"Deep learning improved attributes recognition significantly in recent years. However, many of these networks remain \"black boxes\" and providing a meaningful explanation of their decisions is a major challenge. When these networks misidentify a person, they should be able to explain this mistake. The ability to generate explanations compelling enough to serve as useful accounts of the system's operations at a very high human-level is still in its infancy. In this paper, we utilize person re-identification (re-ID) networks as a platform to generate explanations. We propose and implement a framework that can be used to explain person re-ID using soft-biometric attributes. In particular, the resulting framework embodies a cognitively validated explanatory bias: people prefer and produce explanations that concern inherent properties instead of extrinsic influences. This bias is pervasive in that it affects the fitness of explanations across a broad swath of contexts, particularly those that concern conflicting or anomalous observations. To explain person re-ID, we developed a multiattribute residual network that treats a subset of its features as either inherent or extrinsic. Using these attributes, the system generates explanations based on inherent properties when the similarity of two input images is low, and it generates explanations based on extrinsic properties when the similarity is high. We argue that such a framework provides a blueprint for how to make the decisions of deep networks comprehensible to human operators. As an intermediate step, we demonstrate state-of-the-art attribute recognition performance on two pedestrian datasets (PETA and PA100K) and a face-based attribute dataset (CelebA). The VIPeR dataset is then used to generate explanations for re-ID with a network trained on PETA attributes.","PeriodicalId":150600,"journal":{"name":"2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"177 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Implementing a Robust Explanatory Bias in a Person Re-identification Network\",\"authors\":\"Esube Bekele, W. Lawson, Z. Horne, S. Khemlani\",\"doi\":\"10.1109/CVPRW.2018.00291\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning improved attributes recognition significantly in recent years. However, many of these networks remain \\\"black boxes\\\" and providing a meaningful explanation of their decisions is a major challenge. When these networks misidentify a person, they should be able to explain this mistake. The ability to generate explanations compelling enough to serve as useful accounts of the system's operations at a very high human-level is still in its infancy. In this paper, we utilize person re-identification (re-ID) networks as a platform to generate explanations. We propose and implement a framework that can be used to explain person re-ID using soft-biometric attributes. In particular, the resulting framework embodies a cognitively validated explanatory bias: people prefer and produce explanations that concern inherent properties instead of extrinsic influences. 
This bias is pervasive in that it affects the fitness of explanations across a broad swath of contexts, particularly those that concern conflicting or anomalous observations. To explain person re-ID, we developed a multiattribute residual network that treats a subset of its features as either inherent or extrinsic. Using these attributes, the system generates explanations based on inherent properties when the similarity of two input images is low, and it generates explanations based on extrinsic properties when the similarity is high. We argue that such a framework provides a blueprint for how to make the decisions of deep networks comprehensible to human operators. As an intermediate step, we demonstrate state-of-the-art attribute recognition performance on two pedestrian datasets (PETA and PA100K) and a face-based attribute dataset (CelebA). The VIPeR dataset is then used to generate explanations for re-ID with a network trained on PETA attributes.\",\"PeriodicalId\":150600,\"journal\":{\"name\":\"2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)\",\"volume\":\"177 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CVPRW.2018.00291\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPRW.2018.00291","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Implementing a Robust Explanatory Bias in a Person Re-identification Network
Deep learning has significantly improved attribute recognition in recent years. However, many of these networks remain "black boxes," and providing a meaningful explanation of their decisions is a major challenge. When such a network misidentifies a person, it should be able to explain its mistake. The ability to generate explanations compelling enough to serve as useful, human-level accounts of a system's operations is still in its infancy. In this paper, we use person re-identification (re-ID) networks as a platform for generating explanations. We propose and implement a framework that explains person re-ID decisions using soft-biometric attributes. In particular, the framework embodies a cognitively validated explanatory bias: people prefer, and produce, explanations that concern inherent properties rather than extrinsic influences. This bias is pervasive in that it affects the fitness of explanations across a broad range of contexts, particularly those involving conflicting or anomalous observations. To explain person re-ID, we developed a multi-attribute residual network that treats a subset of its features as either inherent or extrinsic. Using these attributes, the system generates explanations based on inherent properties when the similarity of two input images is low, and on extrinsic properties when the similarity is high. We argue that such a framework provides a blueprint for making the decisions of deep networks comprehensible to human operators. As an intermediate step, we demonstrate state-of-the-art attribute recognition performance on two pedestrian attribute datasets (PETA and PA100K) and a face-based attribute dataset (CelebA). The VIPeR dataset is then used to generate explanations for re-ID with a network trained on PETA attributes.
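The similarity-gated selection rule the abstract describes lends itself to a short sketch. Below is a minimal, hypothetical Python illustration: attributes are partitioned into inherent and extrinsic sets, and the explanation cites inherent attributes when two images look dissimilar and extrinsic ones when they look similar. The attribute partition, the 0.5 similarity threshold, and all names (`explain_reid`, `AttributePrediction`, etc.) are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the explanation-selection rule described in the
# abstract. The attribute lists and the 0.5 threshold are illustrative
# assumptions, not taken from the paper.

from dataclasses import dataclass

# Assumed partition of soft-biometric attributes into inherent vs. extrinsic.
INHERENT = {"gender", "age_group", "hair_length"}
EXTRINSIC = {"backpack", "sunglasses", "upper_body_color"}

@dataclass
class AttributePrediction:
    name: str
    score: float  # per-attribute output of the multi-attribute network, in [0, 1]

def explain_reid(similarity: float,
                 preds: list[AttributePrediction],
                 threshold: float = 0.5,
                 top_k: int = 3) -> str:
    """Select attributes for the explanation according to the explanatory bias."""
    # Low similarity -> likely a non-match; explain with inherent properties.
    # High similarity -> likely a match; explain with extrinsic properties.
    pool = INHERENT if similarity < threshold else EXTRINSIC
    relevant = sorted((p for p in preds if p.name in pool),
                      key=lambda p: p.score, reverse=True)[:top_k]
    verdict = "different people" if similarity < threshold else "the same person"
    attrs = ", ".join(f"{p.name} ({p.score:.2f})" for p in relevant)
    return f"The images likely show {verdict} because of: {attrs}."

# Example usage with made-up attribute scores:
preds = [AttributePrediction("gender", 0.91),
         AttributePrediction("backpack", 0.84),
         AttributePrediction("hair_length", 0.77),
         AttributePrediction("sunglasses", 0.40)]
print(explain_reid(similarity=0.22, preds=preds))
```

The key design point mirrored here is that the network's verdict (match vs. non-match) determines which attribute family is deemed explanatorily relevant, so the same set of attribute predictions yields different explanations depending on the similarity score.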