Graph-Based Similarity of Deep Neural Networks

IF 5.5 · CAS Tier 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
Zuohui Chen, Yao Lu, JinXuan Hu, Qi Xuan, Zhen Wang, Xiaoniu Yang
{"title":"基于图的深度神经网络相似性","authors":"Zuohui Chen ,&nbsp;Yao Lu ,&nbsp;JinXuan Hu ,&nbsp;Qi Xuan ,&nbsp;Zhen Wang ,&nbsp;Xiaoniu Yang","doi":"10.1016/j.neucom.2024.128722","DOIUrl":null,"url":null,"abstract":"<div><div>Understanding the enigmatic black-box representations within Deep Neural Networks (DNNs) is an essential problem in the community of deep learning. An initial step towards tackling this conundrum lies in quantifying the degree of similarity between these representations. Various approaches have been proposed in prior research, however, as the field of representation similarity continues to develop, existing metrics are not compatible with each other and struggling to meet the evolving demands. To address this, we propose a comprehensive similarity measurement framework inspired by the natural graph structure formed by samples and their corresponding features within the neural network. Our novel Graph-Based Similarity (GBS) framework gauges the similarity of DNN representations by constructing a weighted, undirected graph based on the output of hidden layers. In this graph, each node represents an input sample, and the edges are weighted in accordance with the similarity between pairs of nodes. Consequently, the measure of representational similarity can be derived through graph similarity metrics, such as layer similarity. We observe that input samples belonging to the same category exhibit dense interconnections within the deep layers of the DNN. To quantify this phenomenon, we employ a motif-based approach to gauge the extent of these interconnections. This serves as a metric to evaluate whether the representation derived from one model can be accurately classified by another. Experimental results show that GBS gets state-of-the-art performance in the sanity check. We also extensively evaluate GBS on downstream tasks to demonstrate its effectiveness, including measuring the transferability of pretrained models and model pruning.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"614 ","pages":"Article 128722"},"PeriodicalIF":5.5000,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Graph-Based Similarity of Deep Neural Networks\",\"authors\":\"Zuohui Chen ,&nbsp;Yao Lu ,&nbsp;JinXuan Hu ,&nbsp;Qi Xuan ,&nbsp;Zhen Wang ,&nbsp;Xiaoniu Yang\",\"doi\":\"10.1016/j.neucom.2024.128722\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Understanding the enigmatic black-box representations within Deep Neural Networks (DNNs) is an essential problem in the community of deep learning. An initial step towards tackling this conundrum lies in quantifying the degree of similarity between these representations. Various approaches have been proposed in prior research, however, as the field of representation similarity continues to develop, existing metrics are not compatible with each other and struggling to meet the evolving demands. To address this, we propose a comprehensive similarity measurement framework inspired by the natural graph structure formed by samples and their corresponding features within the neural network. Our novel Graph-Based Similarity (GBS) framework gauges the similarity of DNN representations by constructing a weighted, undirected graph based on the output of hidden layers. In this graph, each node represents an input sample, and the edges are weighted in accordance with the similarity between pairs of nodes. 
Consequently, the measure of representational similarity can be derived through graph similarity metrics, such as layer similarity. We observe that input samples belonging to the same category exhibit dense interconnections within the deep layers of the DNN. To quantify this phenomenon, we employ a motif-based approach to gauge the extent of these interconnections. This serves as a metric to evaluate whether the representation derived from one model can be accurately classified by another. Experimental results show that GBS gets state-of-the-art performance in the sanity check. We also extensively evaluate GBS on downstream tasks to demonstrate its effectiveness, including measuring the transferability of pretrained models and model pruning.</div></div>\",\"PeriodicalId\":19268,\"journal\":{\"name\":\"Neurocomputing\",\"volume\":\"614 \",\"pages\":\"Article 128722\"},\"PeriodicalIF\":5.5000,\"publicationDate\":\"2024-10-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neurocomputing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0925231224014930\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231224014930","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Understanding the enigmatic black-box representations within Deep Neural Networks (DNNs) is an essential problem in the deep learning community. An initial step towards tackling this conundrum lies in quantifying the degree of similarity between these representations. Various approaches have been proposed in prior research; however, as the field of representation similarity continues to develop, existing metrics are incompatible with one another and struggle to meet evolving demands. To address this, we propose a comprehensive similarity measurement framework inspired by the natural graph structure formed by samples and their corresponding features within the neural network. Our Graph-Based Similarity (GBS) framework gauges the similarity of DNN representations by constructing a weighted, undirected graph from the output of hidden layers. In this graph, each node represents an input sample, and each edge is weighted according to the similarity between the pair of nodes it connects. Representational similarity can then be derived through graph similarity metrics, such as layer similarity. We observe that input samples belonging to the same category exhibit dense interconnections within the deep layers of the DNN. To quantify this phenomenon, we employ a motif-based approach to gauge the extent of these interconnections, which serves as a metric for evaluating whether the representation derived from one model can be accurately classified by another. Experimental results show that GBS achieves state-of-the-art performance on the sanity check. We also extensively evaluate GBS on downstream tasks, including measuring the transferability of pretrained models and model pruning, to demonstrate its effectiveness.
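
To make the graph construction described in the abstract concrete, below is a minimal sketch (not the authors' implementation) of how a representation graph could be built from a hidden layer's output and compared across two models. The use of cosine similarity for edge weights, the edge-weight correlation used as the layer-similarity score, and the function names representation_graph and layer_similarity are illustrative assumptions; the abstract does not fix these choices, and the paper's motif-based measures are not reproduced here.

import numpy as np

def representation_graph(hidden_outputs: np.ndarray) -> np.ndarray:
    """Build a weighted, undirected graph over input samples.

    hidden_outputs: array of shape (n_samples, n_features), the output of one
    hidden layer for a batch of inputs. Returns an (n_samples, n_samples)
    adjacency matrix whose (i, j) entry is the cosine similarity between the
    representations of samples i and j (self-loops set to zero).
    """
    # Normalize each sample's representation to unit length.
    norms = np.linalg.norm(hidden_outputs, axis=1, keepdims=True)
    normalized = hidden_outputs / np.clip(norms, 1e-12, None)
    adjacency = normalized @ normalized.T      # pairwise cosine similarity
    np.fill_diagonal(adjacency, 0.0)           # drop self-loops
    return adjacency

def layer_similarity(graph_a: np.ndarray, graph_b: np.ndarray) -> float:
    """Compare two representation graphs built from the same input batch.

    Here we simply correlate the upper-triangular edge weights; this stands in
    for the graph-similarity metrics (e.g., motif-based measures) used in GBS.
    """
    iu = np.triu_indices_from(graph_a, k=1)
    return float(np.corrcoef(graph_a[iu], graph_b[iu])[0, 1])

if __name__ == "__main__":
    # Usage: feed the same batch through two models, take one hidden layer's
    # output from each, and compare the induced graphs. Random activations
    # are used here as placeholders for real hidden-layer outputs.
    rng = np.random.default_rng(0)
    acts_model_1 = rng.normal(size=(64, 256))
    acts_model_2 = acts_model_1 + 0.1 * rng.normal(size=(64, 256))
    g1 = representation_graph(acts_model_1)
    g2 = representation_graph(acts_model_2)
    print(f"layer similarity: {layer_similarity(g1, g2):.3f}")

In this sketch, two models that organize the same inputs similarly produce adjacency matrices with highly correlated edge weights, which is the intuition behind using graph similarity as a proxy for representational similarity.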
Source journal: Neurocomputing (Engineering & Technology, Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Articles published per year: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.