{"title":"Developing neural networks with neurons competing for survival","authors":"Zhen Peng, Daniel A. Braun","doi":"10.1109/DEVLRN.2015.7346133","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346133","url":null,"abstract":"We study developmental growth in a feedforward neural network model inspired by the survival principle in nature. Each neuron has to select its incoming connections in a way that allow it to fire, as neurons that are not able to fire over a period of time degenerate and die. In order to survive, neurons have to find reoccurring patterns in the activity of the neurons in the preceding layer, because each neuron requires more than one active input at any one time to have enough activation for firing. The sensory input at the lowest layer therefore provides the maximum amount of activation that all neurons compete for. The whole network grows dynamically over time depending on how many patterns can be found and how many neurons can maintain themselves accordingly. If a neuron has found a stable firing pattern, a new neuron is created in the same layer. It is also made sure that there is always at least one neuron in each activated layer that is searching for novel patterns. If a layer stops growing for a certain amount of time, a new layer is created starting with a single neuron.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124357298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A biologically inspired approach to learning spatio-temporal patterns","authors":"Banafsheh Rekabdar, M. Nicolescu, M. Nicolescu, Richard Kelley","doi":"10.1109/DEVLRN.2015.7346159","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346159","url":null,"abstract":"This paper presents an unsupervised approach for learning and classifying patterns that have spatio-temporal structure, using a spike-timing neural network with axonal conductance delays, from a very small set of training samples. Spatio-temporal patterns are converted into spike trains, which can be used to train the network with spike-timing dependent plasticity learning. A pattern is encoded as a string of “characters,” in which each character is a set of neurons that fired at a particular time step, as a result of the network being stimulated with the corresponding input. For classification we compute a similarity measure between a new sample and the training examples, based on the longest common subsequence dynamic programming algorithm to develop a fully unsupervised approach. The approach is tested on a dataset of hand-written digits, which include spatial and temporal information, with results comparable with other state-of-the-art supervised learning approaches.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129129157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning to pick up objects through active exploration","authors":"John G. Oberlin, Stefanie Tellex","doi":"10.1109/DEVLRN.2015.7346151","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346151","url":null,"abstract":"Robots need to perceive and manipulate objects in their environment, yet robust object manipulation remains a challenging problem. Many aspects of a perception and manipulation system need to be customized for a particular object and environment, such as where to grasp an object, what algorithm to use for segmentation, and at which height to visually servo above an object on the table. To address these limitations, we propose an approach for enabling a robot to learn about objects through active exploration and adapt its grasping model accordingly. We frame the problem of model adaptation as a bandit problem, specifically the identification of the best of the arms of an N-armed bandit, [5] where the robot aims to minimize simple regret after a finite exploration period [1]. Our robot can obtain a high-quality reward signal (although sometimes at a higher cost in time and sensing) by actively collecting additional information from the environment, and use this reward signal to adaptively identify grasp points that are likely to succeed. This paper provides an overview of our previous work [3] using this approach to actively infer grasp points and adds a description of our efforts learning the height at which to servo to an object.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117145256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The dynamics of idealized attention in complex learning environments","authors":"Madeline Pelz, S. Piantadosi, Celeste Kidd","doi":"10.1109/DEVLRN.2015.7346147","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346147","url":null,"abstract":"Effective allocation of attention is crucial for many cognitive functions, and attentional disorders (e.g., ADHD) negatively impact learning. Despite the importance of the attentional system, the origins of inattentional behavior remain hazy. Here we present a model of an ideal learner that maximizes information gain in an environment containing multiple objects, each containing a set amount of information to be learned. When constraints on the speed of information decay and ease of shifting attention between objects are added to the system, patterns of attentional switching behavior emerge. These predictions can account for results reported from multiple object tracking tasks. Further, they highlight multiple possible causes underlying the atypical behaviors associated with attentional disorders.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123152039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"First steps towards the development of the sense of object permanence in robots","authors":"Sarah Bechtle, G. Schillaci, V. Hafner","doi":"10.1109/DEVLRN.2015.7346157","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346157","url":null,"abstract":"Evidence in developmental studies showed that infants, around the age of three months, are already able to represent and to reason about hidden objects [1]. We investigate the development of the sense of object permanence in robots. In the preliminary experiment presented here, a humanoid robot has to learn how the movements of its arms affect the visual detection of an object in the scene. The robot is holding a shield in its left hand, which can eventually hide the object from the visual input. As learning mechanism, we adopted a goal-directed exploration behaviour inspired on human development: the Intelligent Adaptive Curiosity (IAC) proposed by Oudeyer, Kaplan and Hafner [2]. We present an implementation of IAC on the humanoid robot Aldebaran Nao and we compare its performance with that of a random exploration strategy.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114649999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling infant visual preference as perceptual oscillation","authors":"B. Balas, L. Oakes","doi":"10.1109/DEVLRN.2015.7345451","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7345451","url":null,"abstract":"Infants' visual recognition abilities are typically studied using variations of preferential looking paradigms. In this broad class of tasks, the extent to which infants discriminate between, categorize, and recognize complex images is determined by which of two test images they prefer to look at. This preference is usually expressed by calculating the proportion of total looking time allocated to a target stimulus (e.g., the stimulus that is more novel) on each trial. Although this coarse description of infant looking behavior has been sufficient to reveal a wide range of important effects, it also potentially obscures great deal of important visual behavior. As a result, we know less about changes in infant looking over learning and development than we would if visual behavior were measured in other ways. We argue that deeper understanding of learning and development of infants' visual behavior requires appreciation of the dynamics of that behavior: During any individual trial, infants look back and forth between stimuli several times. These oscillations between stimuli may reflect aspects of visual processing that have been heretofore overlooked. We suggest that modeling the distribution of look durations made across trials provides a rich description of looking behavior that makes it possible to approach preferential looking as a form of perceptual oscillation, and may provide additional understanding into learning and development. Here we show how fitting the parameters of a gamma distribution to infants' look durations in a face recognition task allows us to see effects that are not evident when simpler descriptors are used and discuss how this approach supports the interpretation of infant behavioral data in the context of neural models of visual competition.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115524467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison of absolute and relative strategies to encode sensorimotor transformations in tool-use","authors":"R. Braud, Alexandre Pitti, P. Gaussier","doi":"10.1109/DEVLRN.2015.7346154","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346154","url":null,"abstract":"We explore different strategies to overcome the problem of sensorimotor transformation that babies face during development, especially in the case of tool-use. From a developmental perspective, we investigate a model based on absolute coordinate frames of reference, and another one based on relative coordinate frames of reference. In a situation of sensorimotor learning and of adaptation to tool-use, we perform a computer simulation of a 4 degrees of freedom robot. We show that the relative coordinate strategy is the most rapid and robust to re-adapt the neural code.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123434941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mastering human-robot interaction control techniques using Chinese Tai Chi Chuan: Mutual learning, intention detection, impedance adaptation, and force borrowing","authors":"Ker-Jiun Wang, Mingui Sun, Lan Zhang, Zhihong Mao","doi":"10.1109/DEVLRN.2015.7346123","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346123","url":null,"abstract":"Physical human-robot interaction (pHRI) has become an important research topic in recent years. It involves close physical body contact between human and robots, which is a very critical technology to enable human-robot symbiosis for our future society. An illustrative example is the development of wearable robots [1], where the wearer can extend or enhance functionalities of his limbs. Since wearable robots are worn in parallel and moving synchronously with the human body, human-in-the-loop control plays an important role. Human and robot are no longer two separate entities that make their own decisions. In contrast, they jointly react to the world according to their mutual behaviors and control strategies. Any intelligent decision making of each controller has to consider the other one's changing dynamics as part of its feedback loop, which is a two-way bilateral control structure.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123127340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"COSMO, a Bayesian computational model of speech communication: Assessing the role of sensory vs. motor knowledge in speech perception","authors":"Marie-Lou Barnaud, J. Diard, P. Bessière, J. Schwartz","doi":"10.1109/DEVLRN.2015.7346149","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346149","url":null,"abstract":"It is now widely accepted that there is a functional relationship between the speech perception and production systems in the human brain. However, the precise mechanisms and role of this relationship still remain debated.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123332129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Use of speech and motion cues for bootstrapping complex action learning in iCub","authors":"Emre Ugur, Jimmy Baraglia, Lars Schillingmann, Y. Nagai","doi":"10.1109/DEVLRN.2015.7346119","DOIUrl":"https://doi.org/10.1109/DEVLRN.2015.7346119","url":null,"abstract":"Parental scaffolding is an important mechanism that speeds up infant sensorimotor development. Infants pay stronger attention to the features of the objects highlighted by parents, and their skills develop earlier than they would in isolation due to caregivers support. Parents are known to make modifications in infant-directed actions, called “motionese”, which is characterized by a wider range of motion, repetitive actions, and longer and more pauses between movements. Inspired from motionese, we previously realized a robotic system [1] where the affordances and effect prediction capabilities that are learned in the previous stages of development are used to bootstrap complex imitation and action learning with the help a cooperative tutor through motionese. With this system, a robot could learn new skills via imitation learning by extracting the important steps from the observed movement trajectory, and then encoding them as subgoals that it can fulfill. Considering the affordances provided by the objects in the environment, it found and sequentially executed the actions that are predicted to generate the desired effects and achieve the subgoals; achieving the overall goal of complete imitation. We showed that motionese can be used to bridge the gap between the interacting agents with different movement capabilities, such as the human tutors and the arm-hand robot we employed. Furthermore, our experimental data indicated that naïve tutors who are not informed about the imitation mechanisms of the robot, changed their teaching strategy, and started to display motionese.","PeriodicalId":164756,"journal":{"name":"2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129319273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}