{"title":"Neural Network Learning of Context-Dependent Affordances","authors":"Luca Simione, A. Borghi, S. Nolfi","doi":"10.33969/ais.2022040106","DOIUrl":null,"url":null,"abstract":"In this paper, we investigated whether affordances are activated automatically, independently of the context in which they are experienced, or not. The first hypothesis postulates that stimuli affording different actions in different contexts tend to activate all actions initially. The action appropriate to the current context is later selected through a competitive process. The second hypothesis instead postulates that only the action appropriate to the current context is activated. The apparent tension between these two alternative hypotheses constitutes an open issue since, in some cases, experimental evidence supports the context-independent hypothesis, while in other cases it supports the context-dependent hypothesis. To study this issue, we trained a deep neural network with stimuli in which action inputs co-varied systematically with visual inputs. The neural network included two separate pathways for encoding visual and action inputs with two hidden layers each, and then a common hidden layer. The training was realized through an auto-associative unsupervised learning algorithm and the testing was conducted by presenting only part of the stimulus to the neural network, to study its generative properties. As a result of the training process, the network formed visual-action affordances. Furthermore, we conducted the training process in different contexts in which the relation between stimuli and actions varied. The analysis of the obtained results indicates that the network displays both a context-dependent activation of affordances (i.e., the action appropriate to the current context tends to be more activated than the alternative action) and a competitive process that refines action selection (i.e., that increases the offset between the activation of the appropriate and unappropriate actions). Overall, this suggests that the apparent contradiction between the two hypotheses can be resolved. Moreover, our analysis indicates that the greater facility with which colour-action associations are acquired with respect to shape-action associations is because the representation of surface features, such as colour, tends to be more readily available for deeper features, such as shape. Our results support the feasibility of human-like affordance acquisition in artificial neural networks trained using a deep learning algorithm. This model could be further applied to a number of robotic and applicative scenarios.","PeriodicalId":273028,"journal":{"name":"Journal of Artificial Intelligence and Systems","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Artificial Intelligence and Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.33969/ais.2022040106","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In this paper, we investigated whether affordances are activated automatically, independently of the context in which they are experienced. The first hypothesis postulates that stimuli affording different actions in different contexts initially tend to activate all actions; the action appropriate to the current context is later selected through a competitive process. The second hypothesis instead postulates that only the action appropriate to the current context is activated. The apparent tension between these two hypotheses constitutes an open issue since, in some cases, experimental evidence supports the context-independent hypothesis, while in other cases it supports the context-dependent hypothesis. To study this issue, we trained a deep neural network with stimuli in which action inputs co-varied systematically with visual inputs. The network included two separate pathways for encoding visual and action inputs, with two hidden layers each, followed by a common hidden layer. Training was realized through an auto-associative unsupervised learning algorithm, and testing was conducted by presenting only part of the stimulus to the network, in order to study its generative properties. As a result of the training process, the network formed visual-action affordances. Furthermore, we conducted the training process in different contexts in which the relation between stimuli and actions varied. The analysis of the results indicates that the network displays both a context-dependent activation of affordances (i.e., the action appropriate to the current context tends to be more activated than the alternative action) and a competitive process that refines action selection (i.e., one that increases the offset between the activations of the appropriate and inappropriate actions). Overall, this suggests that the apparent contradiction between the two hypotheses can be resolved. Moreover, our analysis indicates that colour-action associations are acquired more easily than shape-action associations because the representation of surface features, such as colour, tends to be more readily available than that of deeper features, such as shape. Our results support the feasibility of human-like affordance acquisition in artificial neural networks trained with a deep learning algorithm. The model could be further applied to a number of robotic and applied scenarios.
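To make the described architecture concrete, below is a minimal PyTorch sketch of a two-pathway auto-associative network of the kind the abstract outlines: separate visual and action encoders with two hidden layers each, a common hidden layer, and decoders that reconstruct both inputs. All layer sizes, activation functions, the optimizer, and the training loop are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class AffordanceNet(nn.Module):
    """Hypothetical two-pathway auto-associative network (sizes assumed)."""

    def __init__(self, visual_dim=64, action_dim=16, hidden=32, shared=24):
        super().__init__()
        # Visual pathway: two hidden layers.
        self.visual_enc = nn.Sequential(
            nn.Linear(visual_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Action pathway: two hidden layers.
        self.action_enc = nn.Sequential(
            nn.Linear(action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Common hidden layer joining the two pathways.
        self.shared = nn.Sequential(
            nn.Linear(2 * hidden, shared), nn.ReLU(),
        )
        # Decoders reconstruct both parts of the stimulus (auto-association).
        self.visual_dec = nn.Linear(shared, visual_dim)
        self.action_dec = nn.Linear(shared, action_dim)

    def forward(self, visual, action):
        h = self.shared(torch.cat(
            [self.visual_enc(visual), self.action_enc(action)], dim=-1))
        return self.visual_dec(h), torch.sigmoid(self.action_dec(h))


model = AffordanceNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training step: reconstruct the full stimulus (visual + action) from itself,
# using stand-in random data in place of the paper's stimuli.
visual = torch.randn(128, 64)
action = torch.rand(128, 16)
rec_v, rec_a = model(visual, action)
loss = loss_fn(rec_v, visual) + loss_fn(rec_a, action)
opt.zero_grad()
loss.backward()
opt.step()

# Testing: present only the visual part (action input zeroed out) and read
# the generated action units -- the network's affordance response.
with torch.no_grad():
    _, generated_action = model(visual, torch.zeros(128, 16))
```

In a sketch like this, the context-dependence reported in the paper would show up in `generated_action`: after training on stimuli where the stimulus-action mapping varies by context, the action unit appropriate to the current context should receive higher activation than the alternative.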