{"title":"A sample survey study of poly-semantic neurons in deep CNNs","authors":"Chang-Bin Zhang, Yue Wang","doi":"10.1117/12.2674650","DOIUrl":null,"url":null,"abstract":"Although deep CNN networks have excellent image classification performance, they do not provide interpretability, and furthermore existing work reveals that these models have complex internals, for example, mysterious polysemantic neurons activate to multiple features. In this work, we analyze the intermediate data of the network dissection paper made by Bau et al. to understand to what extent polysemantic neurons exist. We divide the polysemantic neurons into five types and calculate the percentage of each type by sampling. We find that above 50% neurons identify one concept but there are a quite proportion of neurons that recognize two or more features. This can explain the high classification accuracy and some capacity saving of a deep CNN. By case studies, we draw some conclusions and hypotheses: First, unlike the human visual system, a CNN cannot distinguish detailed features (metaphor: a CNN is like a nearsighted eye). Second, the reason that the CNN is prone to adversarial attacks may be partially due to the polysemantic neurons. Third, polysemantic neurons may partially explain why people wrongly visualize one thing as another in neuroscience.","PeriodicalId":286364,"journal":{"name":"Conference on Computer Graphics, Artificial Intelligence, and Data Processing","volume":"176 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Conference on Computer Graphics, Artificial Intelligence, and Data Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2674650","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Although deep CNNs achieve excellent image classification performance, they offer little interpretability, and existing work reveals that these models have complex internals: for example, mysterious polysemantic neurons activate for multiple distinct features. In this work, we analyze the intermediate data from the network dissection paper by Bau et al. to understand the extent to which polysemantic neurons exist. We divide polysemantic neurons into five types and estimate the percentage of each type by sampling. We find that more than 50% of neurons identify a single concept, but a considerable proportion recognize two or more features. This can explain both the high classification accuracy and some of the capacity savings of a deep CNN. Through case studies, we draw several conclusions and hypotheses. First, unlike the human visual system, a CNN cannot distinguish fine-grained features (metaphor: a CNN is like a nearsighted eye). Second, the susceptibility of CNNs to adversarial attacks may be partially due to polysemantic neurons. Third, polysemantic neurons may partially explain why people misperceive one thing as another, a phenomenon studied in neuroscience.
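The sampling-based estimate described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: it assumes the Network Dissection results are exported as a CSV with unit, concept, and iou columns (hypothetical names), and it groups neurons by how many concepts they detect, a proxy for the paper's five types, which the abstract does not define. The 0.04 IoU threshold follows the detector convention of the original Network Dissection work by Bau et al.

```python
import csv
import random
from collections import Counter

# Assumptions: the dissection results live in a CSV with columns
# "unit", "concept", and "iou" (names are hypothetical).
IOU_THRESHOLD = 0.04  # detector threshold used in Network Dissection
SAMPLE_SIZE = 200

def load_concepts_per_unit(path):
    """Map each unit id to the set of concepts it detects (IoU above threshold)."""
    concepts = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if float(row["iou"]) > IOU_THRESHOLD:
                concepts.setdefault(row["unit"], set()).add(row["concept"])
    return concepts

def sample_polysemanticity(concepts, k=SAMPLE_SIZE, seed=0):
    """Estimate the fraction of mono- vs. polysemantic units from a random sample."""
    rng = random.Random(seed)
    units = rng.sample(sorted(concepts), min(k, len(concepts)))
    counts = Counter(len(concepts[u]) for u in units)
    total = sum(counts.values())
    # Keys: number of detected concepts; values: fraction of sampled units.
    return {n: round(c / total, 3) for n, c in sorted(counts.items())}

if __name__ == "__main__":
    per_unit = load_concepts_per_unit("netdissect_results.csv")  # assumed file name
    print(sample_polysemanticity(per_unit))
```

Under these assumptions, the key with value 1 in the output corresponds to monosemantic neurons (above 50% per the abstract), and keys of 2 or more correspond to polysemantic ones.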