{"title":"神经算法推理中的循环聚合器","authors":"Kaijia Xu, Petar Veličković","doi":"arxiv-2409.07154","DOIUrl":null,"url":null,"abstract":"Neural algorithmic reasoning (NAR) is an emerging field that seeks to design\nneural networks that mimic classical algorithmic computations. Today, graph\nneural networks (GNNs) are widely used in neural algorithmic reasoners due to\ntheir message passing framework and permutation equivariance. In this extended\nabstract, we challenge this design choice, and replace the equivariant\naggregation function with a recurrent neural network. While seemingly\ncounter-intuitive, this approach has appropriate grounding when nodes have a\nnatural ordering -- and this is the case frequently in established reasoning\nbenchmarks like CLRS-30. Indeed, our recurrent NAR (RNAR) model performs very\nstrongly on such tasks, while handling many others gracefully. A notable\nachievement of RNAR is its decisive state-of-the-art result on the Heapsort and\nQuickselect tasks, both deemed as a significant challenge for contemporary\nneural algorithmic reasoners -- especially the latter, where RNAR achieves a\nmean micro-F1 score of 87%.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Recurrent Aggregators in Neural Algorithmic Reasoning\",\"authors\":\"Kaijia Xu, Petar Veličković\",\"doi\":\"arxiv-2409.07154\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Neural algorithmic reasoning (NAR) is an emerging field that seeks to design\\nneural networks that mimic classical algorithmic computations. Today, graph\\nneural networks (GNNs) are widely used in neural algorithmic reasoners due to\\ntheir message passing framework and permutation equivariance. In this extended\\nabstract, we challenge this design choice, and replace the equivariant\\naggregation function with a recurrent neural network. While seemingly\\ncounter-intuitive, this approach has appropriate grounding when nodes have a\\nnatural ordering -- and this is the case frequently in established reasoning\\nbenchmarks like CLRS-30. Indeed, our recurrent NAR (RNAR) model performs very\\nstrongly on such tasks, while handling many others gracefully. 
A notable\\nachievement of RNAR is its decisive state-of-the-art result on the Heapsort and\\nQuickselect tasks, both deemed as a significant challenge for contemporary\\nneural algorithmic reasoners -- especially the latter, where RNAR achieves a\\nmean micro-F1 score of 87%.\",\"PeriodicalId\":501301,\"journal\":{\"name\":\"arXiv - CS - Machine Learning\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.07154\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07154","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Recurrent Aggregators in Neural Algorithmic Reasoning
Neural algorithmic reasoning (NAR) is an emerging field that seeks to design neural networks that mimic classical algorithmic computations. Today, graph neural networks (GNNs) are widely used in neural algorithmic reasoners due to their message-passing framework and permutation equivariance. In this extended abstract, we challenge this design choice and replace the equivariant aggregation function with a recurrent neural network. While seemingly counter-intuitive, this approach has appropriate grounding when nodes have a natural ordering -- which is frequently the case in established reasoning benchmarks such as CLRS-30. Indeed, our recurrent NAR (RNAR) model performs very strongly on such tasks, while handling many others gracefully. A notable achievement of RNAR is its decisive state-of-the-art result on the Heapsort and Quickselect tasks, both deemed significant challenges for contemporary neural algorithmic reasoners -- especially the latter, where RNAR achieves a mean micro-F1 score of 87%.
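
To make the core design concrete, below is a minimal PyTorch sketch of a message-passing layer whose aggregator is an LSTM rather than a permutation-invariant sum or max. This is not the authors' implementation: the class and attribute names (RecurrentAggregationLayer, message, aggregate, update), the dense adjacency representation, and the choice to scan each node's incoming messages in node-index order are all illustrative assumptions consistent with the abstract's description.

import torch
import torch.nn as nn

class RecurrentAggregationLayer(nn.Module):
    """One message-passing step whose aggregator is an LSTM over the
    receiver's incoming messages, scanned in node-index order."""

    def __init__(self, dim: int):
        super().__init__()
        self.message = nn.Linear(2 * dim, dim)                # message from (sender, receiver) pair
        self.aggregate = nn.LSTM(dim, dim, batch_first=True)  # recurrent aggregator
        self.update = nn.Linear(2 * dim, dim)                 # combine node state with its aggregate

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (n, dim) node features; adj: (n, n) dense {0,1} mask, adj[u, v] = 1 iff edge u -> v.
        n, dim = h.shape
        senders = h.unsqueeze(1).expand(n, n, dim)    # senders[u, v]   = h[u]
        receivers = h.unsqueeze(0).expand(n, n, dim)  # receivers[u, v] = h[v]
        m = torch.relu(self.message(torch.cat([senders, receivers], dim=-1)))
        m = m * adj.unsqueeze(-1)        # zero out non-edges (a simplification of
                                         # packing variable-length neighbour sequences)
        seqs = m.transpose(0, 1)         # (n receivers, n senders in index order, dim)
        _, (agg, _) = self.aggregate(seqs)  # final LSTM hidden state per receiver
        agg = agg.squeeze(0)                # (n, dim)
        return torch.relu(self.update(torch.cat([h, agg], dim=-1)))

# Illustrative usage on a random 5-node graph:
# layer = RecurrentAggregationLayer(dim=16)
# h, adj = torch.randn(5, 16), (torch.rand(5, 5) < 0.5).float()
# h = layer(h, adj)  # (5, 16)

Because the LSTM's final state depends on the order in which messages are consumed, such a layer deliberately gives up permutation equivariance; that trade is well grounded precisely when, as in many CLRS-30 tasks, the nodes carry a natural ordering.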