Autonomous Agents and Multi-Agent Systems: Latest Publications

Landmark-based distributed topological mapping and navigation in GPS-denied urban environments using teams of low-cost robots
IF 2.0 · CAS Tier 3 · Computer Science
Autonomous Agents and Multi-Agent Systems Pub Date : 2024-02-19 DOI: 10.1007/s10458-024-09635-y
Mohammad Saleh Teymouri, Subhrajit Bhattacharya
{"title":"Landmark-based distributed topological mapping and navigation in GPS-denied urban environments using teams of low-cost robots","authors":"Mohammad Saleh Teymouri,&nbsp;Subhrajit Bhattacharya","doi":"10.1007/s10458-024-09635-y","DOIUrl":"10.1007/s10458-024-09635-y","url":null,"abstract":"<div><p>In this paper, we address the problem of autonomous multi-robot mapping, exploration and navigation in unknown, GPS-denied indoor or urban environments using a team of robots equipped with directional sensors with limited sensing capabilities and limited computational resources. The robots have no a priori knowledge of the environment and need to rapidly explore and construct a map in a distributed manner using existing landmarks, the presence of which can be detected using onboard senors, although little to no metric information (distance or bearing to the landmarks) is available. In order to correctly and effectively achieve this, the presence of a necessary density/distribution of landmarks is ensured by design of the urban/indoor environment. We thus address this problem in two phases: (1) During the design/construction of the urban/indoor environment we can ensure that sufficient landmarks are placed within the environment. To that end we develop a <i>filtration</i>-based approach for designing strategic placement of landmarks in an environment. (2) We develop a distributed algorithm which a team of robots, with no a priori knowledge of the environment, can use to explore such an environment, construct a topological map requiring no metric/distance information, and use that map to navigate within the environment. This is achieved using a topological representation of the environment (called a <i>Landmark Complex</i>), instead of constructing a complete metric/pixel map. The representation is built by the robot as well as used by them for navigation through a balanced strategy involving exploration and exploitation. We use tools from homology theory for identifying “<i>holes</i>” in the coverage/exploration of the unknown environment and hence guide the robots towards achieving a complete exploration and mapping of the environment. Our simulation results demonstrate the effectiveness of the proposed metric-free topological (simplicial complex) representation in achieving exploration, localization and navigation within the environment.</p></div>","PeriodicalId":55586,"journal":{"name":"Autonomous Agents and Multi-Agent Systems","volume":"38 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10458-024-09635-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139910530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
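The central data structure in this abstract, the Landmark Complex, is a simplicial complex whose simplices are groups of landmarks observed together from a single robot pose. The sketch below is only a rough illustration of that idea under assumed inputs (sets of co-observed landmark ids); it is not the authors' algorithm, and the function name and observation format are made up.

```python
from itertools import combinations

def build_landmark_complex(observations):
    """Build a simplicial complex from landmark co-observations.

    observations: iterable of sets of landmark ids, one set per sensing
    event (all landmarks seen from one robot pose). Returns the set of
    simplices (frozensets of landmark ids); every subset of a co-observed
    group is added, so the result is closed under taking faces.
    """
    simplices = set()
    for seen in observations:
        seen = sorted(seen)
        # Add the simplex spanned by the co-observed landmarks
        # together with all of its faces (subsets).
        for k in range(1, len(seen) + 1):
            for face in combinations(seen, k):
                simplices.add(frozenset(face))
    return simplices

if __name__ == "__main__":
    # Three sensing events by (possibly different) robots.
    obs = [{"L1", "L2", "L3"}, {"L2", "L4"}, {"L4", "L5"}]
    cx = build_landmark_complex(obs)
    edges = [s for s in cx if len(s) == 2]
    print(len(cx), "simplices,", len(edges), "edges")
```

Navigation would then run on the 1-skeleton of such a complex (the landmark adjacency graph), and the homology tools mentioned in the abstract are what detect coverage "holes" left by missing co-observations.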
Towards interactive explanation-based nutrition virtual coaching systems
IF 2.0 · CAS Tier 3 · Computer Science
Autonomous Agents and Multi-Agent Systems Pub Date : 2024-01-20 DOI: 10.1007/s10458-023-09634-5
Berk Buzcu, Melissa Tessa, Igor Tchappi, Amro Najjar, Joris Hulstijn, Davide Calvaresi, Reyhan Aydoğan
{"title":"Towards interactive explanation-based nutrition virtual coaching systems","authors":"Berk Buzcu,&nbsp;Melissa Tessa,&nbsp;Igor Tchappi,&nbsp;Amro Najjar,&nbsp;Joris Hulstijn,&nbsp;Davide Calvaresi,&nbsp;Reyhan Aydoğan","doi":"10.1007/s10458-023-09634-5","DOIUrl":"10.1007/s10458-023-09634-5","url":null,"abstract":"<div><p>The awareness about healthy lifestyles is increasing, opening to personalized intelligent health coaching applications. A demand for more than mere suggestions and mechanistic interactions has driven attention to nutrition virtual coaching systems (NVC) as a bridge between human–machine interaction and recommender, informative, persuasive, and argumentation systems. NVC can rely on data-driven opaque mechanisms. Therefore, it is crucial to enable NVC to explain their doing (i.e., engaging the user in discussions (via arguments) about dietary solutions/alternatives). By doing so, transparency, user acceptance, and engagement are expected to be boosted. This study focuses on NVC agents generating personalized food recommendations based on user-specific factors such as allergies, eating habits, lifestyles, and ingredient preferences. In particular, we propose a user-agent negotiation process entailing run-time feedback mechanisms to react to both recommendations and related explanations. Lastly, the study presents the findings obtained by the experiments conducted with multi-background participants to evaluate the acceptability and effectiveness of the proposed system. The results indicate that most participants value the opportunity to provide feedback and receive explanations for recommendations. Additionally, the users are fond of receiving information tailored to their needs. Furthermore, our interactive recommendation system performed better than the corresponding traditional recommendation system in terms of effectiveness regarding the number of agreements and rounds.</p></div>","PeriodicalId":55586,"journal":{"name":"Autonomous Agents and Multi-Agent Systems","volume":"38 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10458-023-09634-5.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139506539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
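The negotiation-style interaction this abstract describes (recommend, explain, collect run-time feedback, revise) can be pictured as a simple loop. The sketch below is purely illustrative and assumes a toy user profile, menu, and feedback callback; none of these names come from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    meal: str
    reasons: list = field(default_factory=list)  # explanation shown to the user

def recommend(profile, rejected):
    # Toy candidate generation: filter by allergies and earlier rejections.
    menu = [("lentil curry", "high protein"), ("salmon bowl", "omega-3"),
            ("tofu stir-fry", "low calorie")]
    for meal, reason in menu:
        if meal not in rejected and not any(a in meal for a in profile["allergies"]):
            return Recommendation(meal, [reason, "matches your stated preferences"])
    return None

def coaching_session(profile, user_feedback, max_rounds=5):
    """Negotiation loop: propose, explain, collect feedback, revise."""
    rejected = set()
    for _ in range(max_rounds):
        rec = recommend(profile, rejected)
        if rec is None:
            return None            # no agreement reached
        if user_feedback(rec):     # user accepts after seeing the explanation
            return rec
        rejected.add(rec.meal)     # feedback drives the next proposal
    return None

if __name__ == "__main__":
    profile = {"allergies": ["salmon"]}
    answers = iter([False, True])  # reject the first proposal, accept the second
    print(coaching_session(profile, lambda rec: next(answers)))
```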
A survey of multi-agent deep reinforcement learning with communication
IF 1.9 · CAS Tier 3 · Computer Science
Autonomous Agents and Multi-Agent Systems Pub Date : 2024-01-06 DOI: 10.1007/s10458-023-09633-6
Changxi Zhu, Mehdi Dastani, Shihan Wang
{"title":"A survey of multi-agent deep reinforcement learning with communication","authors":"Changxi Zhu,&nbsp;Mehdi Dastani,&nbsp;Shihan Wang","doi":"10.1007/s10458-023-09633-6","DOIUrl":"10.1007/s10458-023-09633-6","url":null,"abstract":"<div><p>Communication is an effective mechanism for coordinating the behaviors of multiple agents, broadening their views of the environment, and to support their collaborations. In the field of multi-agent deep reinforcement learning (MADRL), agents can improve the overall learning performance and achieve their objectives by communication. Agents can communicate various types of messages, either to all agents or to specific agent groups, or conditioned on specific constraints. With the growing body of research work in MADRL with communication (Comm-MADRL), there is a lack of a systematic and structural approach to distinguish and classify existing Comm-MADRL approaches. In this paper, we survey recent works in the Comm-MADRL field and consider various aspects of communication that can play a role in designing and developing multi-agent reinforcement learning systems. With these aspects in mind, we propose 9 dimensions along which Comm-MADRL approaches can be analyzed, developed, and compared. By projecting existing works into the multi-dimensional space, we discover interesting trends. We also propose some novel directions for designing future Comm-MADRL systems through exploring possible combinations of the dimensions.</p></div>","PeriodicalId":55586,"journal":{"name":"Autonomous Agents and Multi-Agent Systems","volume":"38 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2024-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10458-023-09633-6.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139111866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IOB: integrating optimization transfer and behavior transfer for multi-policy reuse
IF 1.9 · CAS Tier 3 · Computer Science
Autonomous Agents and Multi-Agent Systems Pub Date : 2023-12-09 DOI: 10.1007/s10458-023-09630-9
Siyuan Li, Hao Li, Jin Zhang, Zhen Wang, Peng Liu, Chongjie Zhang
{"title":"IOB: integrating optimization transfer and behavior transfer for multi-policy reuse","authors":"Siyuan Li,&nbsp;Hao Li,&nbsp;Jin Zhang,&nbsp;Zhen Wang,&nbsp;Peng Liu,&nbsp;Chongjie Zhang","doi":"10.1007/s10458-023-09630-9","DOIUrl":"10.1007/s10458-023-09630-9","url":null,"abstract":"<div><p>Humans have the ability to reuse previously learned policies to solve new tasks quickly, and reinforcement learning (RL) agents can do the same by transferring knowledge from source policies to a related target task. Transfer RL methods can reshape the policy optimization objective (optimization transfer) or influence the behavior policy (behavior transfer) using source policies. However, selecting the appropriate source policy with limited samples to guide target policy learning has been a challenge. Previous methods introduce additional components, such as hierarchical policies or estimations of source policies’ value functions, which can lead to non-stationary policy optimization or heavy sampling costs, diminishing transfer effectiveness. To address this challenge, we propose a novel transfer RL method that selects the source policy without training extra components. Our method utilizes the Q function in the actor-critic framework to guide policy selection, choosing the source policy with the largest one-step improvement over the current target policy. We integrate optimization transfer and behavior transfer (IOB) by regularizing the learned policy to mimic the guidance policy and combining them as the behavior policy. This integration significantly enhances transfer effectiveness, surpasses state-of-the-art transfer RL baselines in benchmark tasks, and improves final performance and knowledge transferability in continual learning scenarios. Additionally, we show that our optimization transfer technique is guaranteed to improve target policy learning.</p></div>","PeriodicalId":55586,"journal":{"name":"Autonomous Agents and Multi-Agent Systems","volume":"38 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2023-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138558138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
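The selection rule summarized above, picking the source policy whose action gives the largest one-step improvement over the current target policy under the learned Q function, is compact enough to sketch. The interfaces below (a critic q(state, action) and policies as callables) are assumptions for illustration, not the authors' code.

```python
def select_guidance_action(q, target_policy, source_policies, state):
    """Pick the action whose one-step Q-value improves most over the current
    target policy's action; fall back to the target action if no source
    policy offers an improvement."""
    baseline = q(state, target_policy(state))
    best_action, best_gain = target_policy(state), 0.0
    for policy in source_policies:
        action = policy(state)
        gain = q(state, action) - baseline
        if gain > best_gain:
            best_action, best_gain = action, gain
    return best_action

if __name__ == "__main__":
    # Toy 1-D example: the critic prefers actions close to +1 in every state.
    q = lambda s, a: -(a - 1.0) ** 2
    target = lambda s: 0.0
    sources = [lambda s: -1.0, lambda s: 0.8]
    print(select_guidance_action(q, target, sources, state=None))  # -> 0.8
```

In the paper, this guidance policy is then used twice: as a regularization target for the learned policy (optimization transfer) and as part of the behavior policy (behavior transfer).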
Uniformly constrained reinforcement learning
IF 1.9 · CAS Tier 3 · Computer Science
Autonomous Agents and Multi-Agent Systems Pub Date : 2023-12-06 DOI: 10.1007/s10458-023-09607-8
Jaeyoung Lee, Sean Sedwards, Krzysztof Czarnecki
{"title":"Uniformly constrained reinforcement learning","authors":"Jaeyoung Lee,&nbsp;Sean Sedwards,&nbsp;Krzysztof Czarnecki","doi":"10.1007/s10458-023-09607-8","DOIUrl":"10.1007/s10458-023-09607-8","url":null,"abstract":"<div><p>We propose new multi-objective reinforcement learning algorithms that aim to find a globally Pareto-optimal deterministic policy that uniformly (in all states) maximizes a reward subject to a uniform probabilistic constraint over reaching forbidden states of a Markov decision process. Our requirements arise naturally in the context of safety-critical systems, but pose a significant unmet challenge. This class of learning problem is known to be hard and there are no off-the-shelf solutions that fully address the combined requirements of determinism and uniform optimality. Having formalized our requirements and highlighted the specific challenge of learning instability, using a simple counterexample, we define from first principles a stable Bellman operator that we prove partially respects our requirements. This operator is therefore a partial solution to our problem, but produces conservative polices in comparison to our previous approach, which was not designed to satisfy the same requirements. We thus propose a relaxation of the stable operator, using <i>adaptive hysteresis</i>, that forms the basis of a heuristic approach that is stable w.r.t. our counterexample and learns policies that are less conservative than those of the stable operator and our previous algorithm. In comparison to our previous approach, the policies of our adaptive hysteresis algorithm demonstrate improved monotonicity with increasing constraint probabilities, which is one of the characteristics we desire. We demonstrate that adaptive hysteresis works well with dynamic programming and reinforcement learning, and can be adapted to function approximation.</p></div>","PeriodicalId":55586,"journal":{"name":"Autonomous Agents and Multi-Agent Systems","volume":"38 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2023-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138491279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
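To make the requirement concrete: the constraint is uniform in the sense that the probability of ever reaching a forbidden state must stay below a threshold from every non-forbidden state. The sketch below only checks that property for a fixed deterministic policy on a small tabular MDP; it is not the paper's stable Bellman operator or adaptive-hysteresis scheme, and the array-based interface is an assumption.

```python
import numpy as np

def forbidden_reach_probabilities(T, policy, forbidden, iters=1000):
    """Approximate, for each state, the probability of ever reaching a
    forbidden state under a deterministic policy, by iterating the
    fixed-point equation p(s) = sum_s' T(s, pi(s), s') p(s').

    T: transitions of shape (S, A, S); policy: int array of length S;
    forbidden: boolean array of length S (treated as absorbing).
    """
    S = T.shape[0]
    p = forbidden.astype(float)
    for _ in range(iters):
        p_next = np.array([T[s, policy[s]] @ p for s in range(S)])
        p_next[forbidden] = 1.0   # once a forbidden state is hit, stay "hit"
        p = p_next
    return p

def satisfies_uniform_constraint(T, policy, forbidden, delta):
    """True iff the reach probability is at most delta in every
    non-forbidden state (the uniform constraint)."""
    p = forbidden_reach_probabilities(T, policy, forbidden)
    return bool(np.all(p[~forbidden] <= delta))

if __name__ == "__main__":
    # Two actions in a 3-state MDP; state 2 is forbidden.
    T = np.zeros((3, 2, 3))
    T[0, 0] = [0.9, 0.1, 0.0]   # cautious action
    T[0, 1] = [0.0, 0.7, 0.3]   # risky action
    T[1, 0] = [0.0, 1.0, 0.0]
    T[1, 1] = [0.0, 0.8, 0.2]
    T[2, :, 2] = 1.0
    forbidden = np.array([False, False, True])
    print(satisfies_uniform_constraint(T, np.array([0, 0, 0]), forbidden, 0.05))  # True
    print(satisfies_uniform_constraint(T, np.array([1, 1, 0]), forbidden, 0.05))  # False
```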
Diffusion auction design with transaction costs
IF 1.9 · CAS Tier 3 · Computer Science
Autonomous Agents and Multi-Agent Systems Pub Date : 2023-12-06 DOI: 10.1007/s10458-023-09631-8
Bin Li, Dong Hao, Dengji Zhao
{"title":"Diffusion auction design with transaction costs","authors":"Bin Li,&nbsp;Dong Hao,&nbsp;Dengji Zhao","doi":"10.1007/s10458-023-09631-8","DOIUrl":"10.1007/s10458-023-09631-8","url":null,"abstract":"<div><p>We study multi-unit auctions powered by intermediated markets, where all transactions are processed by intermediaries and incur certain costs. Each intermediary in the market owns a private set of buyers and all intermediaries are networked with each other. Our goal is to incentivize the intermediaries to share the auction information to individuals they can reach, including their private buyers and neighboring intermediaries, so that more potential buyers are able to participate in the auction. To this end, we build a diffusion-based auction framework to handle the transaction costs and the strategic interactions between intermediaries. The classic Vickrey-Clarke-Groves (VCG) mechanism within the scenario can obtain the maximum social welfare, but it can decrease the seller’s revenue or even lead to a deficit. To overcome the revenue issue, we develop two deficit reduction strategies, based on which a family of diffusion auctions called Critical Neighborhood Auctions (CNA) is identified. The CNA not only maximizes the social welfare, but also eliminates all the seller’s deficits. Moreover, the revenue given by the CNA is no less than the revenue given by the VCG mechanism with/without intermediaries. This is the first set of diffusion auctions with welfare and revenue advantages that can handle multiple items and transaction costs.</p></div>","PeriodicalId":55586,"journal":{"name":"Autonomous Agents and Multi-Agent Systems","volume":"38 1","pages":""},"PeriodicalIF":1.9,"publicationDate":"2023-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138491306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
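For context on the baseline the abstract starts from: in a k-unit auction with unit-demand bidders, the classic VCG mechanism awards the units to the k highest bidders and charges each of them the externality they impose, which reduces to the (k+1)-th highest bid. The sketch below shows only this textbook rule; the diffusion network, the transaction costs, and the Critical Neighborhood Auctions of the paper are not modeled here.

```python
def multi_unit_vcg(bids, k):
    """Textbook VCG for k identical units with unit-demand bidders:
    the k highest bidders win and each pays the externality they impose
    on the others, i.e. the (k+1)-th highest bid (0 if there are fewer
    than k+1 bidders)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = [name for name, _ in ranked[:k]]
    price = ranked[k][1] if len(ranked) > k else 0.0
    return winners, {w: price for w in winners}

if __name__ == "__main__":
    bids = {"a": 10, "b": 7, "c": 5, "d": 2}
    print(multi_unit_vcg(bids, k=2))   # (['a', 'b'], {'a': 5, 'b': 5})
```

The revenue problem discussed in the abstract arises once the welfare objective also has to absorb per-transaction costs along intermediary chains; the payments above can then fall below those costs, which is what the CNA family is designed to repair.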
RGS⊕: RDF graph synchronization for collaborative robotics
IF 1.9 · CAS Tier 3 · Computer Science
Autonomous Agents and Multi-Agent Systems Pub Date : 2023-11-24 DOI: 10.1007/s10458-023-09629-2
Cyrille Berger, Patrick Doherty, Piotr Rudol, Mariusz Wzorek
{"title":"RGS(^oplus ): RDF graph synchronization for collaborative robotics","authors":"Cyrille Berger,&nbsp;Patrick Doherty,&nbsp;Piotr Rudol,&nbsp;Mariusz Wzorek","doi":"10.1007/s10458-023-09629-2","DOIUrl":"10.1007/s10458-023-09629-2","url":null,"abstract":"<div><p>In the context of collaborative robotics, distributed situation awareness is essential for supporting collective intelligence in teams of robots and human agents where it can be used for both individual and collective decision support. This is particularly important in applications pertaining to emergency rescue and crisis management. During operational missions, data and knowledge is gathered incrementally and in different ways by heterogeneous robots and humans. The purpose of this paper is to describe an RDF Graph Synchronization System called RGS<span>(^oplus )</span>. It is assumed that a dynamic set of agents provide or retrieve knowledge stored in their local RDF Graphs which are continuously synchronized between agents. The RGS<span>(^oplus )</span> System was designed to handle unreliable communication and does not rely on a static centralized infrastructure. It is capable of synchronizing knowledge as timely as possible and allows agents to access knowledge while it is incrementally acquired. A deeper empirical analysis of the RGS<span>(^oplus )</span> System is provided that shows both its efficiency and efficacy.</p></div>","PeriodicalId":55586,"journal":{"name":"Autonomous Agents and Multi-Agent Systems","volume":"37 2","pages":""},"PeriodicalIF":1.9,"publicationDate":"2023-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10458-023-09629-2.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138438341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
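The abstract does not spell out the synchronization algorithm, so the sketch below is not RGS⊕ itself; it only illustrates the general idea of replicated RDF-style triple stores whose merge operation is idempotent and commutative, so that replicas converge despite unreliable, out-of-order exchanges. All class and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str

class TripleStore:
    """Illustrative replica of an RDF graph as a set of (triple, stamp)
    assertions. merge() is idempotent and commutative, so replicas can
    exchange state in any order, over lossy links, and still converge."""

    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.clock = 0
        self.assertions = set()          # {(Triple, (clock, agent_id))}

    def add(self, s, p, o):
        self.clock += 1
        self.assertions.add((Triple(s, p, o), (self.clock, self.agent_id)))

    def merge(self, other):
        # Set union: applying the same update twice, or in a different
        # order, yields the same state on every replica.
        self.assertions |= other.assertions
        self.clock = max(self.clock, other.clock)

    def graph(self):
        return {t for t, _ in self.assertions}

if __name__ == "__main__":
    uav, ugv = TripleStore("uav1"), TripleStore("ugv1")
    uav.add("victim_3", "locatedAt", "cell_42")
    ugv.add("road_7", "status", "blocked")
    uav.merge(ugv); ugv.merge(uav)       # order and repetition do not matter
    assert uav.graph() == ugv.graph()
    print(len(uav.graph()), "triples on each replica")
```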
A normative approach for resilient multiagent systems
IF 1.9 · CAS Tier 3 · Computer Science
Autonomous Agents and Multi-Agent Systems Pub Date : 2023-11-13 DOI: 10.1007/s10458-023-09627-4
Geeta Mahala, Özgür Kafalı, Hoa Dam, Aditya Ghose, Munindar P. Singh
{"title":"A normative approach for resilient multiagent systems","authors":"Geeta Mahala,&nbsp;Özgür Kafalı,&nbsp;Hoa Dam,&nbsp;Aditya Ghose,&nbsp;Munindar P. Singh","doi":"10.1007/s10458-023-09627-4","DOIUrl":"10.1007/s10458-023-09627-4","url":null,"abstract":"<div><p>We model a multiagent system (MAS) in socio-technical terms, combining a social layer consisting of norms with a technical layer consisting of actions that the agents execute. This approach emphasizes autonomy, and makes assumptions about both the social and technical layers explicit. Autonomy means that agents may violate norms. In our approach, agents are computational entities, with each representing a different stakeholder. We express stakeholder requirements of the form that a MAS is resilient in that it can recover (sufficiently) from a failure within a (sufficiently short) duration. We present <span>ReNo</span>, a framework that computes probabilistic and temporal guarantees on whether the underlying requirements are met or, if failed, recovered. <span>ReNo</span> supports the refinement of the specification of a socio-technical system through methodological guidelines to meet the stated requirements. An important contribution of <span>ReNo</span> is that it shows how the social and technical layers can be modeled jointly to enable the construction of resilient systems of autonomous agents. We demonstrate <span>ReNo</span> using a manufacturing scenario with competing public, industrial, and environmental requirements.</p></div>","PeriodicalId":55586,"journal":{"name":"Autonomous Agents and Multi-Agent Systems","volume":"37 2","pages":""},"PeriodicalIF":1.9,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134796590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
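A resilience requirement of the kind stated here, recovering (sufficiently) from a failure within a (sufficiently short) duration, is a probabilistic-temporal property. As a purely illustrative aside (ReNo's actual machinery is not described in this abstract), the snippet below estimates such a property by naive Monte Carlo over a toy recovery model; every name in it is made up.

```python
import random

def estimate_recovery_guarantee(run_episode, max_duration, trials=10_000, seed=0):
    """Monte Carlo estimate of a requirement of the form 'after a failure,
    the system recovers within max_duration steps with probability >= p'.

    run_episode(rng) must return the number of steps the (simulated)
    system needed to recover after an injected failure.
    """
    rng = random.Random(seed)
    hits = sum(run_episode(rng) <= max_duration for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    # Toy model: a norm-violating agent is detected and replaced after a
    # uniformly distributed number of steps between 1 and 12.
    episode = lambda rng: rng.randint(1, 12)
    prob = estimate_recovery_guarantee(episode, max_duration=8)
    print(f"P(recover within 8 steps) ~= {prob:.3f}")   # ~0.667
```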
ASN: action semantics network for multiagent reinforcement learning
IF 1.9 · CAS Tier 3 · Computer Science
Autonomous Agents and Multi-Agent Systems Pub Date : 2023-11-10 DOI: 10.1007/s10458-023-09628-3
Tianpei Yang, Weixun Wang, Jianye Hao, Matthew E. Taylor, Yong Liu, Xiaotian Hao, Yujing Hu, Yingfeng Chen, Changjie Fan, Chunxu Ren, Ye Huang, Jiangcheng Zhu, Yang Gao
{"title":"ASN: action semantics network for multiagent reinforcement learning","authors":"Tianpei Yang,&nbsp;Weixun Wang,&nbsp;Jianye Hao,&nbsp;Matthew E. Taylor,&nbsp;Yong Liu,&nbsp;Xiaotian Hao,&nbsp;Yujing Hu,&nbsp;Yingfeng Chen,&nbsp;Changjie Fan,&nbsp;Chunxu Ren,&nbsp;Ye Huang,&nbsp;Jiangcheng Zhu,&nbsp;Yang Gao","doi":"10.1007/s10458-023-09628-3","DOIUrl":"10.1007/s10458-023-09628-3","url":null,"abstract":"<div><p>In multiagent systems (MASs), each agent makes individual decisions but all contribute globally to the system’s evolution. Learning in MASs is difficult since each agent’s selection of actions must take place in the presence of other co-learning agents. Moreover, the environmental stochasticity and uncertainties increase exponentially with the number of agents. Previous works borrow various multiagent coordination mechanisms for use in deep learning architectures to facilitate multiagent coordination. However, none of them explicitly consider that different actions can have different influence on other agents, which we call the action semantics. In this paper, we propose a novel network architecture, named Action Semantics Network (ASN), that explicitly represents such action semantics between agents. ASN characterizes different actions’ influence on other agents using neural networks based on the action semantics between them. ASN can be easily combined with existing deep reinforcement learning (DRL) algorithms to boost their performance. Experimental results on StarCraft II micromanagement and Neural MMO show that ASN significantly improves the performance of state-of-the-art DRL approaches, compared with several other network architectures. We also successfully deploy ASN to a popular online MMORPG game called Justice Online, which indicates a promising future for ASN to be applied in even more complex scenarios.</p></div>","PeriodicalId":55586,"journal":{"name":"Autonomous Agents and Multi-Agent Systems","volume":"37 2","pages":""},"PeriodicalIF":1.9,"publicationDate":"2023-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134796194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
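The core idea, giving actions that target different agents their own value heads conditioned on those agents' observed features, can be sketched loosely in PyTorch. The layer sizes, input format, and module names below are illustrative assumptions, not the ASN architecture as published.

```python
import torch
import torch.nn as nn

class ActionSemanticsQNet(nn.Module):
    """Loose sketch of an ASN-style Q-network: actions that only affect the
    acting agent share one head, while each action directed at another agent
    gets its value from a pairwise sub-network conditioned on that agent's
    observed features."""

    def __init__(self, obs_dim, other_feat_dim, n_self_actions, n_others, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.self_head = nn.Linear(hidden, n_self_actions)
        self.pair_heads = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden + other_feat_dim, hidden),
                          nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_others)
        ])

    def forward(self, obs, others):
        # obs: (batch, obs_dim); others: (batch, n_others, other_feat_dim)
        h = self.encoder(obs)
        q_self = self.self_head(h)                          # (batch, n_self_actions)
        q_others = torch.cat(
            [head(torch.cat([h, others[:, j]], dim=-1))     # one value per target agent
             for j, head in enumerate(self.pair_heads)], dim=-1)
        return torch.cat([q_self, q_others], dim=-1)        # one Q-value per action

if __name__ == "__main__":
    net = ActionSemanticsQNet(obs_dim=10, other_feat_dim=6, n_self_actions=4, n_others=3)
    q = net(torch.randn(2, 10), torch.randn(2, 3, 6))
    print(q.shape)   # torch.Size([2, 7])
```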
A family of strategyproof mechanisms for activity scheduling
IF 1.9 · CAS Tier 3 · Computer Science
Autonomous Agents and Multi-Agent Systems Pub Date : 2023-10-20 DOI: 10.1007/s10458-023-09624-7
Xinping Xu, Jingwen Zhang, Minming Li, Lingjie Duan, Lihua Xie
{"title":"A family of strategyproof mechanisms for activity scheduling","authors":"Xinping Xu,&nbsp;Jingwen Zhang,&nbsp;Minming Li,&nbsp;Lingjie Duan,&nbsp;Lihua Xie","doi":"10.1007/s10458-023-09624-7","DOIUrl":"10.1007/s10458-023-09624-7","url":null,"abstract":"<div><p>Recent years have seen various designs of strategyproof mechanisms in the facility location game and the obnoxious facility game, by considering the facility’s geo-location as a point in the spatial domain. In this paper, we extend this point to be a continuous interval, and study a novel activity scheduling game to schedule an activity in the normalized time domain [0, 1] based on all agents’ time reports for preferences/conflicts. The activity starts at time point <i>y</i> and lasts for a fixed time period of <i>d</i> with <span>(0le dle 1)</span>. Each agent <span>(iin N = {1, cdots , n})</span> wants his preferred time interval <span>([t_i,t_i+l_i])</span> to be close to or overlap with the activity interval <span>([y,y+d])</span>. Since agents are heterogeneous, we consider each agent <i>i</i> has weight <span>(alpha _i)</span> or <span>(beta _i)</span> when the activity is scheduled after or before his time interval, respectively. Thus each agent <i>i</i>’s cost is his weight (<span>(alpha _i)</span> or <span>(beta _i)</span>) multiplied by the time difference between his time interval <span>([t_i,t_i+l_i])</span> and the activity interval <span>([y,y+d].)</span> The social cost is the summation of all agents’ costs. In this game, agents’ preferred time intervals <span>([t_i,t_i+l_i])</span>’s are private information and they may misreport such information to the social planner. Our objective is to choose the activity starting time <i>y</i> so that the mechanisms are strategyproof (i.e., all agents should be truthful to report <span>(t_i)</span>’s and <span>(l_i)</span>’s) and perform well with respect to minimizing the social cost. We design a mechanism outputting an optimal solution and prove that it is group strategyproof. For the objective of minimizing the maximum cost among agents, we design another strategyproof mechanism with the approximation ratio <span>(1+min {alpha /beta ,beta /alpha })</span> when <span>(alpha _i=alpha , beta _i = beta)</span> for <span>(iin N,)</span> and prove it is the best strategyproof mechanism. In the obnoxious activity scheduling game, each agent prefers his conflicting time interval <span>([t_i,t_i+l_i])</span> to be far away from the activity interval <span>([y,y+d])</span>. We design deterministic and randomized group strategyproof mechanisms, and compare their provable approximation ratios to the lower bounds. Finally, we consider the cost/utility of each agent as a 0-1 indicator function and find group strategyproof mechanisms for minimizing the social cost and maximizing the social utility.</p></div>","PeriodicalId":55586,"journal":{"name":"Autonomous Agents and Multi-Agent Systems","volume":"37 2","pages":""},"PeriodicalIF":1.9,"publicationDate":"2023-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50500785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
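Under the cost model in this abstract, the social cost is a convex, piecewise-linear function of the start time $y$, so a welfare-optimal start time can be found by checking the agent-induced breakpoints. The sketch below does exactly that brute-force computation; it is not the paper's strategyproof mechanism (tie-breaking rules and the max-cost objective are not addressed), and the data layout is an assumption.

```python
def social_cost(y, d, agents):
    """agents: list of (t, l, alpha, beta); the activity occupies [y, y+d]."""
    total = 0.0
    for t, l, alpha, beta in agents:
        if y >= t + l:          # activity scheduled after the agent's interval
            total += alpha * (y - (t + l))
        elif y + d <= t:        # activity scheduled before the agent's interval
            total += beta * (t - (y + d))
        # otherwise the intervals overlap and the agent's cost is 0
    return total

def optimal_start(d, agents):
    """The social cost is convex and piecewise linear in y, so an optimum
    lies at an agent-induced breakpoint or at an end of [0, 1 - d]."""
    candidates = {0.0, 1.0 - d}
    for t, l, _, _ in agents:
        for y in (t + l, t - d):
            if 0.0 <= y <= 1.0 - d:
                candidates.add(y)
    return min(candidates, key=lambda y: social_cost(y, d, agents))

if __name__ == "__main__":
    # Two agents with intervals [0.1, 0.3] and [0.6, 0.9], activity length 0.2.
    agents = [(0.1, 0.2, 1.0, 1.0), (0.6, 0.3, 2.0, 1.0)]
    y = optimal_start(d=0.2, agents=agents)
    print(y, social_cost(y, d=0.2, agents=agents))
```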