{"title":"用图神经元胞自动机训练拓扑","authors":"Daniel Dwyer, Maxwell M. Omwenga","doi":"10.1109/eIT57321.2023.10187381","DOIUrl":null,"url":null,"abstract":"Graph neural cellular automata are a recently introduced class of computational models that extend neural cellular automata to arbitrary graphs. They are promising in various applications based on preliminary test results and the successes of related computational models, such as neural cellular automata and convolutional and graph neural networks. However, all previous graph neural cellular automaton implementations have only been able to modify data associated with the vertices and edges, not the underlying graph topology itself. Here we introduce a method of encoding graph topology information as vertex data by assigning each edge and vertex an opacity value, which is the confidence with which the model thinks that that edge or vertex should be present in the output graph. Graph neural cellular automata equipped with this encoding method, henceforth referred to as translucent graph neural cellular automata, were tested in their ability to learn to reconstruct graphs from random subgraphs of them as a proof of concept. The results suggest that translucent graph neural cellular automata are capable of this task, albeit with optimal learning rates highly dependent on the graph to be reconstructed.","PeriodicalId":113717,"journal":{"name":"2023 IEEE International Conference on Electro Information Technology (eIT)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Training Topology With Graph Neural Cellular Automata\",\"authors\":\"Daniel Dwyer, Maxwell M. Omwenga\",\"doi\":\"10.1109/eIT57321.2023.10187381\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Graph neural cellular automata are a recently introduced class of computational models that extend neural cellular automata to arbitrary graphs. They are promising in various applications based on preliminary test results and the successes of related computational models, such as neural cellular automata and convolutional and graph neural networks. However, all previous graph neural cellular automaton implementations have only been able to modify data associated with the vertices and edges, not the underlying graph topology itself. Here we introduce a method of encoding graph topology information as vertex data by assigning each edge and vertex an opacity value, which is the confidence with which the model thinks that that edge or vertex should be present in the output graph. Graph neural cellular automata equipped with this encoding method, henceforth referred to as translucent graph neural cellular automata, were tested in their ability to learn to reconstruct graphs from random subgraphs of them as a proof of concept. 
The results suggest that translucent graph neural cellular automata are capable of this task, albeit with optimal learning rates highly dependent on the graph to be reconstructed.\",\"PeriodicalId\":113717,\"journal\":{\"name\":\"2023 IEEE International Conference on Electro Information Technology (eIT)\",\"volume\":\"4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE International Conference on Electro Information Technology (eIT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/eIT57321.2023.10187381\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Conference on Electro Information Technology (eIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/eIT57321.2023.10187381","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Training Topology With Graph Neural Cellular Automata
Graph neural cellular automata are a recently introduced class of computational models that extend neural cellular automata to arbitrary graphs. Preliminary test results, together with the successes of related computational models such as neural cellular automata and convolutional and graph neural networks, suggest they are promising for a variety of applications. However, all previous graph neural cellular automaton implementations have only been able to modify data associated with the vertices and edges, not the underlying graph topology itself. Here we introduce a method of encoding graph topology information as vertex data by assigning each edge and vertex an opacity value: the model's confidence that the edge or vertex should be present in the output graph. Graph neural cellular automata equipped with this encoding, henceforth referred to as translucent graph neural cellular automata, were tested on their ability to learn to reconstruct graphs from random subgraphs of those graphs as a proof of concept. The results suggest that translucent graph neural cellular automata are capable of this task, albeit with optimal learning rates that depend strongly on the graph to be reconstructed.
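The opacity encoding can be pictured with a small sketch. The Python example below is not the authors' implementation; the attribute name "opacity", the helper functions, and the 0.5 decoding threshold are illustrative assumptions. It shows how a partially observed graph might be represented with per-vertex and per-edge opacities and how a reconstructed graph could be read out by thresholding those opacities.

```python
# Minimal sketch of the "translucent" idea: every vertex and edge carries an
# opacity in [0, 1], interpreted as the model's confidence that it belongs in
# the output graph. Decoding keeps only elements above a threshold.
# The attribute name "opacity" and threshold 0.5 are assumptions, not from the paper.

import networkx as nx


def encode_subgraph(full_graph: nx.Graph, visible_nodes) -> nx.Graph:
    """Copy full_graph, marking observed vertices/edges opaque (1.0)
    and everything outside the observed subgraph transparent (0.0)."""
    g = full_graph.copy()
    visible = set(visible_nodes)
    for v in g.nodes:
        g.nodes[v]["opacity"] = 1.0 if v in visible else 0.0
    for u, v in g.edges:
        g.edges[u, v]["opacity"] = 1.0 if u in visible and v in visible else 0.0
    return g


def decode(g: nx.Graph, threshold: float = 0.5) -> nx.Graph:
    """Keep only vertices and edges whose opacity exceeds the threshold."""
    out = nx.Graph()
    out.add_nodes_from(v for v in g.nodes if g.nodes[v]["opacity"] > threshold)
    out.add_edges_from(
        (u, v)
        for u, v in g.edges
        if g.edges[u, v]["opacity"] > threshold and u in out and v in out
    )
    return out


if __name__ == "__main__":
    # Toy case: a 5-cycle observed through a 3-vertex random subgraph.
    cycle = nx.cycle_graph(5)
    state = encode_subgraph(cycle, visible_nodes=[0, 1, 2])
    print(sorted(decode(state).edges))  # only the observed edges: [(0, 1), (1, 2)]

    # A trained translucent GNCA would iteratively raise the opacities of the
    # missing vertices and edges; with perfect reconstruction, decoding the
    # final state recovers a graph isomorphic to the original.
    state_perfect = encode_subgraph(cycle, visible_nodes=cycle.nodes)
    print(nx.is_isomorphic(decode(state_perfect), cycle))  # True
```

In this sketch the reconstruction target is recovered purely by thresholding opacities; how the automaton's update rule is trained to raise and lower those opacities is the subject of the paper itself and is not shown here.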