Learning behaviours in a modular neural net architecture for a mobile autonomous agent
R. M. Rylatt, C. Czarnecki, T. Routen
Proceedings of the First Euromicro Workshop on Advanced Mobile Robots (EUROBOT '96), 9 October 1996
DOI: 10.1109/EURBOT.1996.551886
The relatively new idea of decomposing the intelligent agent problem into behaviours rather than cognitive functions has had early success, but doubts have arisen concerning the validity of its basic building blocks. It may still be worth retaining the idea that intelligent behaviour can be achieved through the accretion of modules, each having a tight loop between perception and action, but modules based on neural networks may have more potential. One approach that has been demonstrated is to train each module to achieve its individual competence. Although explicit teaching may play a part, a more interesting approach is to allow the agent to learn behaviours by interacting directly with the task environment. This paper presents CRILL, a modular neural net architecture for the autonomous control of a mobile agent using a form of reinforcement learning. An experiment is described in which CRILL navigates through a simulated environment by seeking a series of light sources. The potential of the CRILL approach is assessed as a way of decomposing a complex goal and simplifying the construction of individual neural nets.
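The abstract does not describe CRILL's internals, but the general idea — a behaviour module with a tight perception-action loop, adapted by a scalar reinforcement signal while the agent interacts with a simulated light-seeking task — can be sketched in miniature. The sketch below is purely illustrative and makes its own assumptions: a one-dimensional world, a single-weight module (far simpler than a neural net), and an intensity-increase reward. None of these names or details come from the paper.

```python
import random

class BehaviourModule:
    """A tiny stand-in for one behaviour module: a single weight mapping
    a light-sensor difference to a motor command, adapted by a scalar
    reinforcement signal (hypothetical, not the CRILL learning rule)."""
    def __init__(self):
        self.w = random.uniform(-1.0, 1.0)

    def act(self, sensor_diff):
        # Motor command: step left (-1) or right (+1).
        return 1 if self.w * sensor_diff > 0 else -1

    def reinforce(self, sensor_diff, action, reward):
        # Nudge the weight so rewarded (sensor, action) pairs
        # become more likely and punished ones less so.
        self.w += 0.1 * reward * action * sensor_diff

def intensity(pos, light):
    # Light intensity falls off smoothly with distance from the source.
    return 1.0 / (1.0 + (pos - light) ** 2)

def run_trial(light=10.0, steps=200):
    random.seed(0)
    module = BehaviourModule()
    pos = 0.0
    for _ in range(steps):
        # Perception: intensity just right of the agent minus just left.
        sensor_diff = intensity(pos + 0.5, light) - intensity(pos - 0.5, light)
        before = intensity(pos, light)
        action = module.act(sensor_diff)
        pos += action * 0.5
        # Reinforcement: reward moves that increased the sensed intensity.
        reward = 1.0 if intensity(pos, light) > before else -1.0
        module.reinforce(sensor_diff, action, reward)
    return pos

print(f"final position: {run_trial():.1f}")  # settles near the light at x = 10
```

In the paper's terms, a series of such light sources would exercise a sequence of modules; the sketch shows only the inner loop of one module's competence being shaped by environmental feedback rather than explicit teaching.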