{"title":"Bayesian Noisy Word Clustering through Sampling Prototypical Words","authors":"T. Taniguchi, Yuta Fukusako, Toshiaki Takano","doi":"10.1109/DEVLRN.2018.8760503","DOIUrl":"https://doi.org/10.1109/DEVLRN.2018.8760503","url":null,"abstract":"This paper describes a new algorithm for sampling prototypical words from a set of noisy words and proposes a noisy word clustering method. In a lexical acquisition task, phoneme sequences recognized by a developmental robot using a phoneme recognizer have many errors. A letter or phoneme sequence involving errors is called a noisy word. To develop a mixture model for noisy words and develop a clustering method, a procedure needs to be developed for the sampling of a prototypical word, i.e., “mean” string, in a cluster of noisy words. Despite a long history regarding methods for treating noisy words, e.g., a stochastic deformation model, the edit distance and their variants, and an efficient sampling procedure for prototypical words have not been developed. In this paper, the mixture of stochastic deformation models, namely a generative model for noisy words, is proposed, and efficient blocked Gibbs samplers for the model are proposed. To develop this procedure, a forward filtering backward sampling procedure is proposed for jointly decoding noisy words and sampling their “mean” string. We applied the proposed clustering method to a set of noisy synthetic words and obtained better results than a baseline method. In particular, a sampling procedure using tied backward sampling demonstrated the best performance in reconstructing original words from noisy words through a clustering process.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127609974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Solving Bidirectional Tasks using MTRNN","authors":"Alexandre Antunes, Alban Laflaquière, A. Cangelosi","doi":"10.1109/DEVLRN.2018.8761012","DOIUrl":"https://doi.org/10.1109/DEVLRN.2018.8761012","url":null,"abstract":"In this paper we study the learning of bidirectional tasks in a Recurrent Neural Network (RNN). Most of such models deal with a flow of information in only one direction, either generating outputs or encoding inputs; However, using a single network to do both tasks simultaneously would be more efficient and biologically plausible. We will be using a Multiple Timescales Recurrent Neural Network (MTRNN) to solve these tasks. The network proves capable of dealing with this bidirectional-flow of information simply by training in both directions, with outputs becoming inputs and vice-versa. We showcase this behaviour on two tasks, using the same network. the first is a sentence learning task, akin to a classification problem. The second task is a motor trajectory learning task, akin to a regression problem. The data used in these tasks has been generated through an iCub robot. We present the results of these experiments and show that this model maintains its properties for the bidirectional tasks. We discuss possible future implementations using this ability to solve more complex scenarios such as action and language grounding.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"257O 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128015215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting Features of Tools, Objects, and Actions from Effects in a Robot using Deep Learning","authors":"Namiko Saito, Kitae Kim, Shingo Murata, T. Ogata, S. Sugano","doi":"10.1109/DEVLRN.2018.8761029","DOIUrl":"https://doi.org/10.1109/DEVLRN.2018.8761029","url":null,"abstract":"We propose a tool-use model that can detect the features of tools, target objects, and actions from the provided effects of object manipulation. We construct a model that enables robots to manipulate objects with tools, using infant learning as a concept. To realize this, we train sensory-motor data recorded during a tool-use task performed by a robot with deep learning. Experiments include four factors: (1) tools, (2) objects, (3) actions, and (4) effects, which the model considers simultaneously. For evaluation, the robot generates predicted images and motions given information of the effects of using unknown tools and objects. We confirm that the robot is capable of detecting features of tools, objects, and actions by learning the effects and executing the task.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130871963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transfer Learning of Complex Motor Skills on the Humanoid Robot Affetto","authors":"Alexander Schulz, J. Queißer, H. Ishihara, M. Asada","doi":"10.1109/DEVLRN.2018.8761031","DOIUrl":"https://doi.org/10.1109/DEVLRN.2018.8761031","url":null,"abstract":"Although autonomous robots can perform particularly well at highly specific tasks, learning each task in isolation is a very costly process, not only in terms of time but also in terms of hardware wearout and energy usage. Hence, robotic systems need to be able to adapt quickly to new situations in order to be useful in everyday tasks. One way to address this issue is transfer learning, which aims at reusing knowledge obtained in one situation, in a new related one. In this contribution, we develop a drumming scenario with the child robot Affetto where the environment changes such that the scene can only be observed through a mirror. In order to address such domain adaptation problems, we propose a novel transfer learning algorithm that aims at mapping data from the new domain in such a way that the original model is applicable again. We demonstrate this method on an artificial data set as well as in the robot setting.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122508288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predictive Models for Robot Ego-Noise Learning and Imitation","authors":"Antonio Pico Villalpando, G. Schillaci, V. Hafner","doi":"10.1109/DEVLRN.2018.8761017","DOIUrl":"https://doi.org/10.1109/DEVLRN.2018.8761017","url":null,"abstract":"We investigate predictive models for robot ego-noise learning and imitation. In particular, we present a framework based on internal models—such as forward and inverse models—that allow a robot to learn how its movements sound like, and to communicate actions to perform to other robots through auditory means. We adopt a developmental approach in the learning of such models, where training sensorimotor data is gathered through self-exploration behaviours. In a simulated experiment presented here, a robot generates specific auditory features from an intended sequence of actions and communicates them for reproduction to another robot, which consequently decodes them into motor commands, using the knowledge of its own motor system. As to the current state, this paper presents an experiment where a robot reproduces auditory sequences previously generated by itself. The presented experiment demonstrates the potentials of the proposed architecture for robot ego-noise learning and for robot communication and imitation through natural means, such as audition. Future work will include situations where different agents use models that are trained with—and thus are specific to—their own self-generated sensorimotor data.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116827280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Microsaccades for asynchronous feature extraction with spiking networks","authors":"Jacques Kaiser, Gerd Lindner, J. C. V. Tieck, Martin Schulze, M. Hoff, A. Rönnau, R. Dillmann","doi":"10.1109/DEVLRN.2018.8761007","DOIUrl":"https://doi.org/10.1109/DEVLRN.2018.8761007","url":null,"abstract":"While extracting spatial features from images has been studied for decades, extracting spatio-temporal features from event streams is still a young field of research. A particularity of event streams is that the same network architecture can be used for recognition of static objects or motions. However, it is not clear what features provide a good abstraction and in what scenario. In this paper, we evaluate the quality of the features of a spiking HMAX architecture by computing classification performance before and after each layer. We demonstrate the abstraction capability of classical edge features, as were found in the V1 area of the visual cortex, combined with fixational eye movements. Specifically, our performance on N-Caltech101 dataset outperforms previously reported $F_{1}$ score on Caltech101, with a similar architecture but without a STDP learning layer. However, we show that the same edge features do not manage to abstract motions observed with a static DVS from the DvsGesture dataset. Additionally, we show that liquid state machines are a promising computational model for the classification of DVS data with temporal dynamics. This paper is a step forward towards understanding and reproducing biological vision.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134141429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cooperative Control of An Ankle Rehabilitation Robot Based on Human Intention","authors":"Qingsong Ai, Lei Wang, Kun Chen, Anqi Chen, Jiwei Hu, Yilin Fang, Quan Liu, Zude Zhou","doi":"10.1109/DEVLRN.2018.8761006","DOIUrl":"https://doi.org/10.1109/DEVLRN.2018.8761006","url":null,"abstract":"Motor imagery electroencephalogram (EEG) is a kind of brain signal induced by subjective consciousness. Relevant studies in the field of sports rehabilitation show that motor imagery training can promote the recovery of damaged nerves and the reconstruction of motor nerve pathways. This paper proposes a human-brain cooperative control strategy of a pneumatic muscle-driven ankle rehabilitation robot based on motor imagery EEG. Robots provide assisted rehabilitation training for patients with impaired neural transmission but with movement intentions. The brain network algorithm is used to select the optimal channels for the motor imagery signal, and the common spatial pattern (CSP) method is combined with the time-frequency analysis method local characteristic-scale decomposition (LCD) to extract the time-frequency information. Finally, the classification is processed by the spectral regression discriminant analysis (SRDA) classifier. In addition, two rehabilitation training modes are designed, namely, synchronous rehabilitation training and asynchronous rehabilitation training. The experimental results prove that a brain intention driven human robot cooperative control method is realized to complete an ankle rehabilitation training task effectively.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128908992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient Reward-Based Learning through Body Representation in a Spiking Neural Network","authors":"Yuji Kawai, Tomohiro Takimoto, Jihoon Park, M. Asada","doi":"10.1109/DEVLRN.2018.8761011","DOIUrl":"https://doi.org/10.1109/DEVLRN.2018.8761011","url":null,"abstract":"Brain-body interactions guide the development of behavioral and cognitive functions. Sensory signals during behavior are relayed to the brain and evoke neural activity. This feedback is important for the organization of neural networks via neural plasticity, which in turn facilitates the generation of motor commands for new behaviors. In this study, we investigated how brain-body interactions develop and affect reward-based learning. We constructed a spiking neural network (SNN) model for the reward-based learning of canonical babbling, i.e., combination of a vowel and consonant. Motor commands to a vocal simulator were generated by SNN output and auditory signals representing the vocalized sound were fed back into the SNN. Synaptic weights in the SNN were updated using spike-timing-dependent plasticity (STDP). Connections from the SNN to the vocal simulator were modulated based on reward signals in terms of saliency of the vocalized sound. Our results showed that, under auditory feedback, STDP enabled the model to rapidly acquire babbling-like vocalization. We found that some neurons in the SNN were more highly activated during vocalization of a consonant than during other sounds. That is, neural dynamics in the SNN adapted to task-related articulator movements. Accordingly, body representation in the SNN facilitated brain-body interaction and accelerated the acquisition of babbling behavior.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114308512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Who Can Benefit from Robots? Effects of Individual Differences in Robot-Assisted Language Learning","authors":"Junko Kanero, Idil Franko, Cansu Oranç, Orhun Ulusahin, Sümeyye Koskulu, Zeynep Adıgüzel, A. Küntay, T. Göksun","doi":"10.1109/DEVLRN.2018.8761028","DOIUrl":"https://doi.org/10.1109/DEVLRN.2018.8761028","url":null,"abstract":"It has been suggested that some individuals may benefit more from social robots than do others. Using second language (L2) as an example, the present study examined how individual differences in attitudes toward robots and personality traits may be related to learning outcomes. Preliminary results with 24 Turkish-speaking adults suggest that negative attitudes toward robots, more specifically thoughts and anxiety about the negative social impact that robots may have on the society, predicted how well adults learned L2 words from a social robot. The possible implications of the findings as well as future directions are also discussed.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123749035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive and Variational Continuous Time Recurrent Neural Networks","authors":"Stefan Heinrich, Tayfun Alpay, S. Wermter","doi":"10.1109/DEVLRN.2018.8761019","DOIUrl":"https://doi.org/10.1109/DEVLRN.2018.8761019","url":null,"abstract":"In developmental robotics, we model cognitive processes, such as body motion or language processing, and study them in natural real-world conditions. Naturally, these sequential processes inherently occur on different continuous timescales. Similar as our brain can cope with them by hierarchical abstraction and coupling of different processing modes, computational recurrent neural models need to be capable of adapting to temporally different characteristics of sensorimotor information. In this paper, we propose adaptive and variational mechanisms that can tune the timescales in Continuous Time Recurrent Neural Networks (CTRNNs) to the characteristics of the data. We study these mechanisms in both synthetic and natural sequential tasks to contribute to a deeper understanding of how the networks develop multiple timescales and represent inherent periodicities and fluctuations. Our findings include that our Adaptive CTRNN (ACTRNN) model self-organises timescales towards both representing short-term dependencies and modulating representations based on long-term dependencies during end-to-end learning.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125524417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}