{"title":"Effects of individual decision schemes on group behavior","authors":"C. Barray","doi":"10.1109/ICMAS.1998.699227","DOIUrl":"https://doi.org/10.1109/ICMAS.1998.699227","url":null,"abstract":"The effects of a shared decision function on group behavior are studied. Previous work by others suggests that the amount of information available to the individuals plays a crucial role in group performance. This work extends the previous work to show that the control parameters in the decision rule utilized by the individuals significantly effects group behavior. This work shows that the decision rule influences the group behavior to a greater extent than the amount of information does. Indeed, a control scheme has been found that improves the performance of the system such that the behavior is no longer drastically affected by varying the amount of information available.","PeriodicalId":244857,"journal":{"name":"Proceedings International Conference on Multi Agent Systems (Cat. No.98EX160)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121304844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Genetic encoding of agent behavioral strategy","authors":"Stéphane Calderoni, P. Marcenac, R. Courdier","doi":"10.1109/ICMAS.1998.699234","DOIUrl":"https://doi.org/10.1109/ICMAS.1998.699234","url":null,"abstract":"The general framework tackled in this paper is the automatic generation of intelligent collective behaviors using genetic programming and reinforcement teaming. We define a behavior-based system relying on automatic design process using artificial evolution to synthesize high level behaviors for autonomous agents. Behavioral strategies are described by tree-based structures, and manipulated by generic evolving processes. Each strategy is dynamically evaluated during simulation, and is weighted by an adaptation function as a quality factor that reflects its relevance as good solution for the learning task. It is computed using heterogeneous reinforcement techniques associating immediate reinforcements and delayed reinforcements as dynamic progress estimators.","PeriodicalId":244857,"journal":{"name":"Proceedings International Conference on Multi Agent Systems (Cat. No.98EX160)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115948565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Probabilistic reasoning in a distributed multi-agent environment","authors":"S. Wong, C. Butz","doi":"10.1109/ICMAS.1998.699218","DOIUrl":"https://doi.org/10.1109/ICMAS.1998.699218","url":null,"abstract":"In this paper, a model is proposed for multi-agent probabilistic reasoning in a distributed environment. Unlike other methods, this model is capable of processing input in a truly asynchronous fashion. Asynchronous control protocols and a method for processing evidence are developed to ensure global consistency at all times. The proposed system then extends beyond an interpretive system since the now well-defined concept of a distributed request can be introduced. Techniques are also suggested to reduce data transmission in answering this type of request.","PeriodicalId":244857,"journal":{"name":"Proceedings International Conference on Multi Agent Systems (Cat. No.98EX160)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116503086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An integrated development environment for distributed multi-agent applications","authors":"A. Mehra, V. Chiodini","doi":"10.1109/ICMAS.1998.699281","DOIUrl":"https://doi.org/10.1109/ICMAS.1998.699281","url":null,"abstract":"This paper presents an Agent Development Environment (ADE) for building distributed multi-agent applications. ADE provides a predefined class hierarchy of agents and agent parts, an agent communications \"middleware\", and a graphical language for designing and developing agent behavior based on the Grafcet standard. In addition, ADE provides a distributed simulation environment to test agent-based applications, and a center to deploy agents on the network. In this paper we provide a brief overview of ADE and present two Adaptive Control applications developed with ADE.","PeriodicalId":244857,"journal":{"name":"Proceedings International Conference on Multi Agent Systems (Cat. No.98EX160)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124610364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Designing social cognition models for multi-agent systems through simulating primate societies","authors":"S. Picault, A. Collinot","doi":"10.1109/ICMAS.1998.699055","DOIUrl":"https://doi.org/10.1109/ICMAS.1998.699055","url":null,"abstract":"In this paper we discuss the advantages of investigating primate societies to build Multi-Agent Systems, and we present our preliminary results in this context. We first give an overview of primates' social competences, then we draw a parallel between the main problems found in the study of primate societies (regarding their social organization) and some of the most commonly encountered issues when designing Multi-Agent Systems. We describe a model of social cognition and perception that we have experimented. Its results show that some social concepts can be implemented by attaching importance to the interactions between the agents, instead of using a complicated individual-based model. Finally, we discuss the main extensions we are working on and propose applications to Multi-Agent technology.","PeriodicalId":244857,"journal":{"name":"Proceedings International Conference on Multi Agent Systems (Cat. No.98EX160)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114829090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-machine scheduling-a multi-agent learning approach","authors":"W. Brauer, Gerhard Weiss","doi":"10.1109/ICMAS.1998.699030","DOIUrl":"https://doi.org/10.1109/ICMAS.1998.699030","url":null,"abstract":"Multi machine scheduling, that is, the assignment of jobs to machines such that certain performance demands like cost and time effectiveness are fulfilled, is a ubiquitous and complex activity in everyday life. The paper presents an approach to multi machine scheduling that follows the multiagent learning paradigm known from the field of distributed artificial intelligence. According to this approach the machines collectively and as a whole learn and iteratively refine appropriate schedules. The major characteristic of this approach is that learning is distributed over several machines, and that the individual machines carry out their learning activities in a parallel and asynchronous way.","PeriodicalId":244857,"journal":{"name":"Proceedings International Conference on Multi Agent Systems (Cat. No.98EX160)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133748730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The moving target function problem in multi-agent learning","authors":"J. Vidal, E. Durfee","doi":"10.1109/ICMAS.1998.699075","DOIUrl":"https://doi.org/10.1109/ICMAS.1998.699075","url":null,"abstract":"We describe a framework that can be used to model and predict the behavior of MASs with learning agents. It uses a difference equation for calculating the progression of an agent's error in its decision function, thereby telling us how the agent is expected to fare in the MAS. The equation relies on parameters which capture the agents' learning abilities (such as its change rate, learning rate and retention rate) as well as relevant aspects of the MAS (such as the impact that agents have on each other). We validate the framework with experimental results using reinforcement learning agents in a market system, as well as by other experimental results gathered from the AI literature.","PeriodicalId":244857,"journal":{"name":"Proceedings International Conference on Multi Agent Systems (Cat. No.98EX160)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130508239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Information-passing and belief-revision in multi-agent systems","authors":"R. V. Eijk, F. D. Boer, W. Hoek, J. Meyer","doi":"10.1109/ICMAS.1998.699292","DOIUrl":"https://doi.org/10.1109/ICMAS.1998.699292","url":null,"abstract":"We define a programming language for multi agent systems in which agents interact with a common environment and cooperate by exchanging their individual beliefs on the environment. In handling the information they acquire, the agents employ operations to expand remove and update their individual belief bases. The overall framework, which generalizes traditional concurrent programming concepts, is parameterized by an information system of constraints. Such a system is used to represent the environment as well as the beliefs of the agents. We give the syntax of the programming language and develop an operational semantics in terms of a transition system.","PeriodicalId":244857,"journal":{"name":"Proceedings International Conference on Multi Agent Systems (Cat. No.98EX160)","volume":"246 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131719126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A reinforcement learning approach to cooperative problem solving","authors":"Tetsuya Yoshida, K. Hori, S. Nakasuka","doi":"10.1109/ICMAS.1998.699295","DOIUrl":"https://doi.org/10.1109/ICMAS.1998.699295","url":null,"abstract":"We propose an extension of reinforcement learning methods to cooperative problem solving in multi agent systems. Exploiting multiple agents for complex problems is promising, however, learning is necessary since complete domain knowledge is rarely available. The temporal difference algorithm is applied in each agent to learn a heuristic evaluation of states. Besides the reward for solutions produced by agents, we define the reward for coherence as a whole and exploit them to facilitate cooperation among agents for global problem solving. We evaluate the method by experiments on the satellite design problem. The result shows that our method enables agents to learn to cooperate as well as to learn individual heuristics within one framework. Especially, agents themselves learn to take the appropriate balance between exploration and exploitation in problem solving, which is known to greatly affect the performance. It also suggests the possibility of controlling the global behavior of multi agent systems via rewards in reinforcement learning.","PeriodicalId":244857,"journal":{"name":"Proceedings International Conference on Multi Agent Systems (Cat. No.98EX160)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131007913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Heterogeneity, stability, and efficiency in distributed systems","authors":"James D. Thomas, K. Sycara","doi":"10.1109/ICMAS.1998.699069","DOIUrl":"https://doi.org/10.1109/ICMAS.1998.699069","url":null,"abstract":"This paper explores the increasing the heterogeneity of an agent population to stabilize decentralized systems by adding bias terms to each agent's expected payoffs. Two approaches are evaluated, corresponding to heterogeneous preferences and heterogeneous transaction costs; empirically, the transaction cost case provides stability with near optimal payoffs under certain conditions. Theoretically, in the idealized case of an infinite number of agents, it is proven that the system with added heterogeneous preferences has a fired point different from that of the unbiased system, guaranteeing suboptimal perfomance, while the transaction cast case is demonstrated to have a fixed point identical to that of the unbiased system, and it is further shown to be a contraction mapping, guaranteeing convergence. This contraction mapping allows us to conceptualize the model with heterogeneous transaction costs as a decentralized root finding system.","PeriodicalId":244857,"journal":{"name":"Proceedings International Conference on Multi Agent Systems (Cat. No.98EX160)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114142861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}