Transformations establishing equivalence across neural networks: When have two networks learned the same task?
Tom Bertalan, Felix Dietrich, Ioannis G. Kevrekidis
DOI: 10.1063/5.0206406 (published 2024-07-01)
Transformations are a key tool in the qualitative study of dynamical systems: transformations to a normal form, for example, underpin the study of instabilities and bifurcations. In this work, we test, and when possible establish, an equivalence between two different artificial neural networks by attempting to construct a data-driven transformation between them, using diffusion maps with a Mahalanobis-like metric. If the construction succeeds, the two networks can be thought of as belonging to the same equivalence class. We first discuss transformation functions between only the outputs of the two networks; we then also consider transformations that take into account outputs (activations) of a number of internal neurons from each network. Whitney's theorem dictates the number of (generic) measurements from one of the networks required to reconstruct each and every feature of the second network. The construction of the transformation function relies on a consistent, intrinsic representation of the network input space. We illustrate our algorithm by matching neural network pairs trained to learn (a) observations of scalar functions, (b) observations of two-dimensional vector fields, and (c) representations of images of a moving three-dimensional object (a rotating horse). We also demonstrate reconstruction of a network's input (and output) from minimal partial observations of intermediate neuron activations. The construction of equivalences across different network instantiations clearly relates to transfer learning and will also be valuable in establishing equivalence between different machine learning-based tools.
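The central numerical ingredient named in the abstract is a diffusion-map embedding computed with a Mahalanobis-like metric. As a rough, illustrative sketch only (not the authors' code), the snippet below assumes each sample of network outputs comes with a local covariance estimate (for instance, obtained by perturbing the corresponding inputs) and builds the kernel, the Markov matrix, and the leading non-trivial eigenvectors with NumPy. The function name `diffusion_map`, the scale parameter `eps`, and the way the covariances are supplied are all assumptions made for illustration.

```python
# Hypothetical sketch: diffusion maps with a Mahalanobis-like kernel.
# Not the paper's implementation; parameters and structure are illustrative.
import numpy as np

def diffusion_map(X, covs, eps, n_coords=2):
    """X: (n, d) samples; covs: (n, d, d) local covariance estimates; eps: kernel scale."""
    n = X.shape[0]
    inv_covs = np.linalg.pinv(covs)            # pseudo-inverses of the local covariances
    D2 = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            diff = X[i] - X[j]
            # symmetrized Mahalanobis-like squared distance between samples i and j
            D2[i, j] = 0.5 * diff @ (inv_covs[i] + inv_covs[j]) @ diff
    K = np.exp(-D2 / eps)                       # Gaussian kernel on those distances
    d = K.sum(axis=1)
    K_tilde = K / np.outer(d, d)                # density normalization
    P = K_tilde / K_tilde.sum(axis=1, keepdims=True)   # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # drop the trivial constant eigenvector; scale coordinates by their eigenvalues
    idx = order[1:n_coords + 1]
    return vecs.real[:, idx] * vals.real[idx]
```

In this reading, applying such an embedding to samples from each network would yield intrinsic coordinates of the shared input space, over which an output-to-output (or activation-to-activation) transformation could then be fit; the specifics of that step follow the paper itself.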