{"title":"不确定语义、表示干扰和自举代理的必要不变性","authors":"A. Censi, R. Murray","doi":"10.1109/DEVLRN.2011.6037313","DOIUrl":null,"url":null,"abstract":"In the problem of bootstrapping, an agent must learn to use an unknown body, in an unknown world, starting from zero information about the world, its sensors, and its actuators. So far, this fascinating problem has not; been given a proper normalization. In this paper, we provide a possible rigorous definition of one of the key aspects of bootstrapping, namely the fact that an agent must be able to use “uninterpreted” observations and commands. We show that this can be formalized by positing the existence of representation nuisances that act on the data, and which must be tolerated by an agent. The classes of nuisances tolerate d in directly encode the assumptions needed about the world, and therefore the agent's ability to solve smaller or larger classes of bootstrapping problem instances. Moreover, we argue that the behavior of an agent that claims optimality must actually be invariant to the representation nuisances, and we discuss several design principles to obtain such invariance.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Uncertain semantics, representation nuisances, and necessary invariance properties of bootstrapping agents\",\"authors\":\"A. Censi, R. Murray\",\"doi\":\"10.1109/DEVLRN.2011.6037313\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the problem of bootstrapping, an agent must learn to use an unknown body, in an unknown world, starting from zero information about the world, its sensors, and its actuators. So far, this fascinating problem has not; been given a proper normalization. In this paper, we provide a possible rigorous definition of one of the key aspects of bootstrapping, namely the fact that an agent must be able to use “uninterpreted” observations and commands. We show that this can be formalized by positing the existence of representation nuisances that act on the data, and which must be tolerated by an agent. The classes of nuisances tolerate d in directly encode the assumptions needed about the world, and therefore the agent's ability to solve smaller or larger classes of bootstrapping problem instances. 
Moreover, we argue that the behavior of an agent that claims optimality must actually be invariant to the representation nuisances, and we discuss several design principles to obtain such invariance.\",\"PeriodicalId\":256921,\"journal\":{\"name\":\"2011 IEEE International Conference on Development and Learning (ICDL)\",\"volume\":\"25 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2011-10-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2011 IEEE International Conference on Development and Learning (ICDL)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DEVLRN.2011.6037313\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 IEEE International Conference on Development and Learning (ICDL)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DEVLRN.2011.6037313","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Uncertain semantics, representation nuisances, and necessary invariance properties of bootstrapping agents
In the problem of bootstrapping, an agent must learn to use an unknown body, in an unknown world, starting from zero information about the world, its sensors, and its actuators. So far, this fascinating problem has not been given a proper formalization. In this paper, we provide a possible rigorous definition of one of the key aspects of bootstrapping, namely the fact that an agent must be able to use "uninterpreted" observations and commands. We show that this can be formalized by positing the existence of representation nuisances that act on the data, and which must be tolerated by an agent. The classes of nuisances tolerated directly encode the assumptions needed about the world, and therefore the agent's ability to solve smaller or larger classes of bootstrapping problem instances. Moreover, we argue that the behavior of an agent that claims optimality must actually be invariant to the representation nuisances, and we discuss several design principles to obtain such invariance.
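To make the central definition concrete, one way the invariance requirement could be stated is sketched below. The notation is illustrative rather than the paper's own: D stands for a world (body, sensors, actuators), g = (g_obs, g_cmd) for a representation nuisance made of invertible maps on the observation and command streams, and A for the agent.

% A minimal sketch of the invariance condition, under assumed notation
% (not the paper's exact formalism). A nuisance g = (g_obs, g_cmd)
% rewrites the data the agent sees,
%   y' = g_obs(y),   u = g_cmd(u'),
% so g . D denotes the same world D presented through the transformed
% representation. An agent A tolerates a nuisance class G if its
% closed-loop behavior is unchanged by every nuisance in the class:
\[
  \mathrm{behavior}(A,\; g \cdot D)
  \;=\;
  \mathrm{behavior}(A,\; D)
  \qquad \text{for all } g \in G .
\]

Read this way, choosing a larger class G weakens the assumptions the agent is allowed to make about its sensors and actuators, which is the trade-off the abstract points to: an agent that tolerates a larger nuisance class can solve a larger class of bootstrapping problem instances.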