{"title":"Development of object manipulation through self-exploratory visuomotor experience","authors":"Kenta Kawamoto, K. Noda, Takashi Hasuo, K. Sabe","doi":"10.1109/DEVLRN.2011.6037362","DOIUrl":"https://doi.org/10.1109/DEVLRN.2011.6037362","url":null,"abstract":"Human infants learn to interpret their visuomotor experience and predict the effects of their actions by practicing interactions with the environment. This paper presents a computational model of this process that can be implemented in an artificial agent. We first present a mechanism for simultaneous segmentation and modeling of the agent's body, movable objects and their visual environment. This model can explain a “sense of agency” in terms of predictive certainty of object movements conditioned by actions. We then describe causality learning for object manipulation in detail. Our experimental setup requires a model considering combinational causalities beyond simple direct causality. A novel strategy for causal exploration is proposed and its effectiveness is shown in experiments. The results show that the proposed model allows an agent to efficiently acquire object manipulation skills through self-exploratory visuomotor experience, that is, a sequence of pairs of raw bitmap image and taken actions at each time step.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133309506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ignorance is bliss: A complexity perspective on adapting reactive architectures","authors":"T. Wareham, J. Kwisthout, W. Haselager, I. Rooij","doi":"10.1109/DEVLRN.2011.6037337","DOIUrl":"https://doi.org/10.1109/DEVLRN.2011.6037337","url":null,"abstract":"We study the computational complexity of adapting a reactive architecture to meet task constraints. This computational problem has application in a wide variety of fields, including cognitive and evolutionary robotics and cognitive neuroscience. We show that—even for a rather simple world and a simple task—adapting a reactive architecture to perform a given task in the given world is NP-hard. This result implies that adapting reactive architectures is computationally intractable regardless the nature of the adaptation process (e.g., engineering, development, evolution, learning, etc.) unless very special conditions apply. In order to find such special conditions for tractability, we have performed parameterized complexity analyses. One of our main findings is that architectures with limited sensory and perceptual abilities are efficiently adaptable.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130846101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intrinsic Activitity: from motor babbling to play","authors":"Mark H. Lee","doi":"10.1109/DEVLRN.2011.6037375","DOIUrl":"https://doi.org/10.1109/DEVLRN.2011.6037375","url":null,"abstract":"This paper presents the hypothesis that intrinsic, apparently goal-free, motor-centric activity is a fundamental and necessary component of cognitive development in truly autonomous intelligent agents, both human and artificial.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"225 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123639604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sequential pattern mining of multimodal data streams in dyadic interactions","authors":"Damian Fricker, Hui Zhang, Chen Yu","doi":"10.1109/DEVLRN.2011.6037334","DOIUrl":"https://doi.org/10.1109/DEVLRN.2011.6037334","url":null,"abstract":"In this paper we propose a sequential pattern mining method to analyze multimodal data streams using a quantitative temporal approach. While the existing algorithms can only find sequential orders of temporal events, this paper presents a new temporal data mining method focusing on extracting exact timings and durations of sequential patterns extracted from multiple temporal event streams. We present our method with its application to the detection and extraction of human sequential behavioral patterns over multiple multimodal data streams in human-robot interactions. Experimental results confirmed the feasibility and quality of our proposed pattern mining algorithm, and suggested a quantitative data-driven way to ground social interactions in a manner that has never been achieved before.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121711065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bootstrapping word learning: A perception driven semantics-first approach","authors":"A. Mukerjee, Nikhil Joshi, Prabhat Mudgal, S. Srinath","doi":"10.1109/DEVLRN.2011.6037345","DOIUrl":"https://doi.org/10.1109/DEVLRN.2011.6037345","url":null,"abstract":"In recent decades, evidence for preverbal perceptual categorization in infants has been accumulating, and a role has been suggested for such processes in bootstrapping word learning, i.e. acquiring the very first word-meaning associations. We propose a computational study to consider the possibility that initial notions of semantic classes may help word learning. We consider visual scenes with many possible referents, and consider unparsed linguistic descriptions in text form. We use no prior knowledge of vision domain, or of morphology, syntax or word frequency. Using a synthetic model of object attention, we show that the system is able to first identify perceptual classes from the visual stream, and then associate these with words from the linguistic stream. Working with Hindi text, we demonstrate the ability to learn words for prominent proto-concepts like BICYCLE, TRUCK, and CAR from a complex traffic video. We compare the associations when learning unsegmented poly-syllabic strings in the language (without knowledge of word boundaries) versus segmented words, and find that the poly-syllables do nearly as well. This suggests that early acquisition of some semantic classes may also help in parsing the input stream into “words”. The model is then used on a novel video from a similar domain, to identify objects with their labels. Since we provide no knowledge to the system either for the visual or language analyses, the results are likely to hold for other visual scenes and languages.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115665625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Making a robotic scene representation accessible to feature and label queries","authors":"Stephan K. U. Zibner, C. Faubel, G. Schöner","doi":"10.1109/DEVLRN.2011.6037360","DOIUrl":"https://doi.org/10.1109/DEVLRN.2011.6037360","url":null,"abstract":"We present a neural architecture for scene representation that stores semantic information about objects in the robot's workspace. We show how this representation can be queried both through low-level features such as color and size, through feature conjunctions, as well as through symbolic labels. This is possible by binding different feature dimensions through space and integrating these space-feature representations with an object recognition system. Queries lead to the activation of a neural representation of previously seen objects, which can then be used to drive object-oriented action. The representation is continuously linked to sensory information and autonomously updates when objects are moved or removed.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115807915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ego-centric and allo-centric abstraction in self-organized hierarchical neural networks","authors":"M. Maniadakis, J. Tani, P. Trahanias","doi":"10.1109/DEVLRN.2011.6037347","DOIUrl":"https://doi.org/10.1109/DEVLRN.2011.6037347","url":null,"abstract":"The computational systems supporting the cognitive capacity of artificial agents are often structured hierarchically, with sensory-motor details placed in the lower levels, and abstracted conceptual items in the upper levels. Such an architecture mimics the structural properties of the animal and human nervous system. To operate efficiently in varying circumstances, artificial agents are necessary to consider both ego-centric (i.e. self-centered) and allo-centric (i.e. other-centered) information, which are further combined to address given tasks. The present work investigates effective assemblies for simultaneously placing ego-centric and allo-centric processes in the cognitive hierarchy, by evolving self-organized neural network controllers. The systematic study of the internal network mechanisms has showed that effective neural assemblies are developed by placing allo-centric information in the upper levels of the cognitive hierarchy, followed by ego-centric abstracted representations in the middle and finally sensory-motor details in the lower level. We present and discuss the obtained results considering how they are related with known assumptions about human brain functionality.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"171 2-3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116637174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Familiarity-to-novelty shift driven by learning: A conceptual and computational model","authors":"Quan Wang, Pramod Chandrashekhariah, Gabriele Spina","doi":"10.1109/DEVLRN.2011.6037314","DOIUrl":"https://doi.org/10.1109/DEVLRN.2011.6037314","url":null,"abstract":"We propose a new theory explaining the familiarity-to-novelty shift in infant habituation. In our account, infants' interest in a stimulus is related to their learning progress, i.e. the improvement of an internal model of the stimulus. Specifically, we propose infants prefer the stimulus for which its current learning progress is maximal. We also propose a new algorithm called Selective Learning Self Organizing Map (SL-SOM), a biologically inspired modification to SOM, exhibiting familiarity-to-novelty shift. Using this algorithm we present experiments on a robotic platform.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114558280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What a successful grasp tells about the success chances of grasps in its vicinity","authors":"L. Bodenhagen, R. Detry, J. Piater, N. Krüger","doi":"10.1109/DEVLRN.2011.6037342","DOIUrl":"https://doi.org/10.1109/DEVLRN.2011.6037342","url":null,"abstract":"Infants gradually improve their grasping competences, both in terms of motor abilities as well as in terms of the internal shape grasp representations. Grasp densities [3] provide a statistical model of such an internal learning process. In the concept of grasp densities, kernel density estimation is used based on a six-dimensional kernel representing grasps with given position and orientation. For this so far an isotropic kernel has been used which exact shape have only been weakly justified. Instead in this paper, we use an anisotropic kernel that is statistically based on measured conditional probabilities representing grasp success in the neighborhood of a successful grasp. The anisotropy has been determined utilizing a simulation environment that allowed for evaluation of large scale experiments. The anisotropic kernel has been fitted to the conditional probabilities obtained from the experiments. We then show that convergence is an important problem associated with the grasp density approach and we propose a measure for the convergence of the densities. In this context, we show that the use of the statistically grounded anisotropic kernels leads to a significantly faster convergence of grasp densities.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124589572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust central pattern generators for embodied hierarchical reinforcement learning","authors":"M. Snel, Shimon Whiteson, Y. Kuniyoshi","doi":"10.1109/DEVLRN.2011.6037352","DOIUrl":"https://doi.org/10.1109/DEVLRN.2011.6037352","url":null,"abstract":"Hierarchical organization of behavior and learning is widespread in animals and robots, among others to facilitate dealing with multiple tasks. In hierarchical reinforcement learning, agents usually have to learn to recombine or modulate low-level behaviors when facing a new task, which costs time that could potentially be saved by employing intrinsically adaptive low-level controllers. At the same time, although there exists extensive research on the use of pattern generators as low-level controllers for robot motion, the effect of their potential adaptivity on high-level performance on multiple tasks has not been explicitly studied. This paper investigates this effect using a dynamically simulated hexapod robot that needs to complete a high-level learning task on terrains of varying complexity. Results show that as terrain difficulty increases and adaptivity to environmental disturbances becomes more important, low-level controllers with a degree of instability have a positive impact on high-level performance. In particular, these controllers provide an initial performance boost that is maintained throughout learning, showing that their instability does not negatively affect their predictability, which is important for learning.","PeriodicalId":256921,"journal":{"name":"2011 IEEE International Conference on Development and Learning (ICDL)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127646332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}