{"title":"Reinforcement Learning Discovers Efficient Decentralized Graph Path Search Strategies","authors":"Alexei Pisacane, Victor-Alexandru Darvariu, Mirco Musolesi","doi":"arxiv-2409.07932","DOIUrl":null,"url":null,"abstract":"Graph path search is a classic computer science problem that has been\nrecently approached with Reinforcement Learning (RL) due to its potential to\noutperform prior methods. Existing RL techniques typically assume a global view\nof the network, which is not suitable for large-scale, dynamic, and\nprivacy-sensitive settings. An area of particular interest is search in social\nnetworks due to its numerous applications. Inspired by seminal work in\nexperimental sociology, which showed that decentralized yet efficient search is\npossible in social networks, we frame the problem as a collaborative task\nbetween multiple agents equipped with a limited local view of the network. We\npropose a multi-agent approach for graph path search that successfully\nleverages both homophily and structural heterogeneity. Our experiments, carried\nout over synthetic and real-world social networks, demonstrate that our model\nsignificantly outperforms learned and heuristic baselines. Furthermore, our\nresults show that meaningful embeddings for graph navigation can be constructed\nusing reward-driven learning.","PeriodicalId":501032,"journal":{"name":"arXiv - CS - Social and Information Networks","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Social and Information Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07932","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Graph path search is a classic computer science problem that has recently been approached with Reinforcement Learning (RL) due to its potential to outperform prior methods. Existing RL techniques typically assume a global view of the network, which is not suitable for large-scale, dynamic, and privacy-sensitive settings. Search in social networks is an area of particular interest due to its numerous applications. Inspired by seminal work in experimental sociology, which showed that decentralized yet efficient search is possible in social networks, we frame the problem as a collaborative task among multiple agents, each equipped with a limited local view of the network. We propose a multi-agent approach to graph path search that successfully leverages both homophily and structural heterogeneity. Our experiments, carried out on synthetic and real-world social networks, demonstrate that our model significantly outperforms learned and heuristic baselines. Furthermore, our results show that meaningful embeddings for graph navigation can be constructed through reward-driven learning.
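
To make the decentralized setting concrete, the sketch below shows the kind of heuristic baseline the abstract contrasts against: a greedy search in which the searcher only sees the neighbours of its current node and steps toward the neighbour whose attributes are closest to the target's (homophily), breaking ties toward higher-degree nodes (a crude use of structural heterogeneity). This is an illustrative assumption of the setup, not the authors' method; the graph, attribute vectors, and function names are hypothetical.

```python
# Illustrative sketch of a decentralized greedy search baseline.
# NOT the paper's RL method; all names and data here are made up for illustration.
from math import dist


def decentralized_greedy_search(adj, attrs, source, target, max_hops=50):
    """Route toward `target` using only local information.

    adj:   dict mapping node -> list of neighbour nodes (the local view).
    attrs: dict mapping node -> attribute vector (proxy for homophily).
    Returns the path taken, or None if the hop budget runs out
    (greedy routing can cycle, hence the budget).
    """
    path = [source]
    current = source
    for _ in range(max_hops):
        if current == target:
            return path
        neighbours = adj[current]
        if target in neighbours:  # direct link to the target
            path.append(target)
            return path
        # Greedy homophily step: pick the neighbour whose attributes are
        # closest to the target's; prefer higher-degree nodes on ties.
        current = min(
            neighbours,
            key=lambda n: (dist(attrs[n], attrs[target]), -len(adj[n])),
        )
        path.append(current)
    return None


if __name__ == "__main__":
    # Toy hand-built graph and attribute vectors.
    adj = {
        "a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"],
        "d": ["b", "c", "e"], "e": ["d"],
    }
    attrs = {
        "a": (0.0, 0.0), "b": (0.5, 0.1), "c": (0.1, 0.6),
        "d": (0.8, 0.7), "e": (1.0, 1.0),
    }
    print(decentralized_greedy_search(adj, attrs, "a", "e"))  # ['a', 'c', 'd', 'e']
```

In the paper's framing, the hand-crafted distance heuristic above is what the learned, reward-driven node embeddings would replace, while still respecting the constraint that each agent observes only its local neighbourhood.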