Recurrent Aggregators in Neural Algorithmic Reasoning

Kaijia Xu, Petar Veličković
{"title":"神经算法推理中的循环聚合器","authors":"Kaijia Xu, Petar Veličković","doi":"arxiv-2409.07154","DOIUrl":null,"url":null,"abstract":"Neural algorithmic reasoning (NAR) is an emerging field that seeks to design\nneural networks that mimic classical algorithmic computations. Today, graph\nneural networks (GNNs) are widely used in neural algorithmic reasoners due to\ntheir message passing framework and permutation equivariance. In this extended\nabstract, we challenge this design choice, and replace the equivariant\naggregation function with a recurrent neural network. While seemingly\ncounter-intuitive, this approach has appropriate grounding when nodes have a\nnatural ordering -- and this is the case frequently in established reasoning\nbenchmarks like CLRS-30. Indeed, our recurrent NAR (RNAR) model performs very\nstrongly on such tasks, while handling many others gracefully. A notable\nachievement of RNAR is its decisive state-of-the-art result on the Heapsort and\nQuickselect tasks, both deemed as a significant challenge for contemporary\nneural algorithmic reasoners -- especially the latter, where RNAR achieves a\nmean micro-F1 score of 87%.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Recurrent Aggregators in Neural Algorithmic Reasoning\",\"authors\":\"Kaijia Xu, Petar Veličković\",\"doi\":\"arxiv-2409.07154\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Neural algorithmic reasoning (NAR) is an emerging field that seeks to design\\nneural networks that mimic classical algorithmic computations. Today, graph\\nneural networks (GNNs) are widely used in neural algorithmic reasoners due to\\ntheir message passing framework and permutation equivariance. In this extended\\nabstract, we challenge this design choice, and replace the equivariant\\naggregation function with a recurrent neural network. While seemingly\\ncounter-intuitive, this approach has appropriate grounding when nodes have a\\nnatural ordering -- and this is the case frequently in established reasoning\\nbenchmarks like CLRS-30. Indeed, our recurrent NAR (RNAR) model performs very\\nstrongly on such tasks, while handling many others gracefully. 
A notable\\nachievement of RNAR is its decisive state-of-the-art result on the Heapsort and\\nQuickselect tasks, both deemed as a significant challenge for contemporary\\nneural algorithmic reasoners -- especially the latter, where RNAR achieves a\\nmean micro-F1 score of 87%.\",\"PeriodicalId\":501301,\"journal\":{\"name\":\"arXiv - CS - Machine Learning\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.07154\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07154","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Neural algorithmic reasoning (NAR) is an emerging field that seeks to design neural networks that mimic classical algorithmic computations. Today, graph neural networks (GNNs) are widely used in neural algorithmic reasoners due to their message passing framework and permutation equivariance. In this extended abstract, we challenge this design choice, and replace the equivariant aggregation function with a recurrent neural network. While seemingly counter-intuitive, this approach has appropriate grounding when nodes have a natural ordering -- and this is frequently the case in established reasoning benchmarks like CLRS-30. Indeed, our recurrent NAR (RNAR) model performs very strongly on such tasks, while handling many others gracefully. A notable achievement of RNAR is its decisive state-of-the-art result on the Heapsort and Quickselect tasks, both deemed a significant challenge for contemporary neural algorithmic reasoners -- especially the latter, where RNAR achieves a mean micro-F1 score of 87%.
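The mechanical change the abstract describes is small: within a standard message-passing step, the order-invariant reduction over a node's incoming messages (e.g. max or sum) is swapped for a recurrent network that consumes those messages in the nodes' natural index order. Below is a minimal PyTorch sketch of that idea, assuming an LSTM aggregator over a dense message tensor; the class, shapes, and names are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class RecurrentAggregator(nn.Module):
    """Replaces a permutation-equivariant aggregator (max/sum over
    neighbours) with an LSTM that reads messages in sender-index order.

    This deliberately breaks permutation equivariance: it exploits the
    natural node ordering that tasks in benchmarks like CLRS-30 provide.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.rnn = nn.LSTM(input_size=dim, hidden_size=dim, batch_first=True)

    def forward(self, messages: torch.Tensor) -> torch.Tensor:
        # messages: (num_receivers, num_senders, dim), with senders sorted
        # by node index so the sequence order is well defined.
        _, (h_n, _) = self.rnn(messages)
        return h_n.squeeze(0)  # final hidden state = aggregated message

# Toy usage on a fully connected 5-node graph with 16-dim messages,
# where messages[i, j] is the message node j sends to node i.
n, dim = 5, 16
agg = RecurrentAggregator(dim)
out = agg(torch.randn(n, n, dim))
print(out.shape)  # torch.Size([5, 16])
```

Feeding messages in a fixed sender order is what lets the aggregator exploit positional structure (such as array indices in sorting tasks), at the cost of no longer being invariant to node relabelling.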