Reinforcement Learning Discovers Efficient Decentralized Graph Path Search Strategies

Alexei Pisacane, Victor-Alexandru Darvariu, Mirco Musolesi
{"title":"Reinforcement Learning Discovers Efficient Decentralized Graph Path Search Strategies","authors":"Alexei Pisacane, Victor-Alexandru Darvariu, Mirco Musolesi","doi":"arxiv-2409.07932","DOIUrl":null,"url":null,"abstract":"Graph path search is a classic computer science problem that has been\nrecently approached with Reinforcement Learning (RL) due to its potential to\noutperform prior methods. Existing RL techniques typically assume a global view\nof the network, which is not suitable for large-scale, dynamic, and\nprivacy-sensitive settings. An area of particular interest is search in social\nnetworks due to its numerous applications. Inspired by seminal work in\nexperimental sociology, which showed that decentralized yet efficient search is\npossible in social networks, we frame the problem as a collaborative task\nbetween multiple agents equipped with a limited local view of the network. We\npropose a multi-agent approach for graph path search that successfully\nleverages both homophily and structural heterogeneity. Our experiments, carried\nout over synthetic and real-world social networks, demonstrate that our model\nsignificantly outperforms learned and heuristic baselines. Furthermore, our\nresults show that meaningful embeddings for graph navigation can be constructed\nusing reward-driven learning.","PeriodicalId":501032,"journal":{"name":"arXiv - CS - Social and Information Networks","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Social and Information Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07932","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Graph path search is a classic computer science problem that has been recently approached with Reinforcement Learning (RL) due to its potential to outperform prior methods. Existing RL techniques typically assume a global view of the network, which is not suitable for large-scale, dynamic, and privacy-sensitive settings. An area of particular interest is search in social networks due to its numerous applications. Inspired by seminal work in experimental sociology, which showed that decentralized yet efficient search is possible in social networks, we frame the problem as a collaborative task between multiple agents equipped with a limited local view of the network. We propose a multi-agent approach for graph path search that successfully leverages both homophily and structural heterogeneity. Our experiments, carried out over synthetic and real-world social networks, demonstrate that our model significantly outperforms learned and heuristic baselines. Furthermore, our results show that meaningful embeddings for graph navigation can be constructed using reward-driven learning.
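To make the decentralized setting concrete, the sketch below illustrates the kind of search the abstract describes: an agent sitting at a node can only observe that node's neighbours and their attributes, and must choose the next hop without a global view of the graph. The greedy, attribute-similarity rule shown here is an illustrative heuristic baseline in the spirit of homophily-based routing, not the authors' multi-agent RL method; the function and attribute names are hypothetical.

```python
# Minimal sketch of decentralized graph path search with a local view.
# Assumption: nodes carry attribute dictionaries (e.g. location, occupation)
# that serve as a homophily signal. This is NOT the paper's learned policy.

import networkx as nx


def local_view(graph: nx.Graph, node):
    """Everything a decentralized agent may observe at `node`:
    its neighbours and their attribute dictionaries."""
    return {nbr: dict(graph.nodes[nbr]) for nbr in graph.neighbors(node)}


def attribute_distance(attrs_a: dict, attrs_b: dict) -> int:
    """Toy homophily signal: number of attributes on which two nodes differ."""
    keys = set(attrs_a) | set(attrs_b)
    return sum(attrs_a.get(k) != attrs_b.get(k) for k in keys)


def decentralized_greedy_search(graph: nx.Graph, source, target, max_hops: int = 50):
    """Route from `source` towards `target` using only local views,
    greedily hopping to the neighbour most similar to the target."""
    target_attrs = dict(graph.nodes[target])
    path, current = [source], source
    for _ in range(max_hops):
        if current == target:
            return path
        neighbours = local_view(graph, current)
        # Pick the neighbour whose attributes best match the target's.
        current = min(
            neighbours,
            key=lambda n: attribute_distance(neighbours[n], target_attrs),
        )
        path.append(current)
    return path  # may terminate without reaching the target within the hop budget
```

A learned approach such as the one proposed in the paper replaces the hand-crafted similarity rule with a reward-driven policy, which the abstract reports also exploits structural heterogeneity (e.g. high-degree hubs) rather than homophily alone.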