{"title":"A robot to study the development of artwork appreciation through social interactions","authors":"A. Karaouzene, P. Gaussier, Denis Vidal","doi":"10.1109/DEVLRN.2013.6652554","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652554","url":null,"abstract":"In this work, we present a model based on social referencing mechanisms for the development of artwork appreciation. We show that developing autonomous artwork preferences in a robot is an interesting framework for developmental robotics. We therefore present an application of our model in a natural environment. Using a museum as the experimental setting gives us access to a large number of visitors with different backgrounds and personal interests, and poses the challenge of learning and recognizing a large number of artefacts (a set of objects heterogeneous in size and culture). To overcome limitations such as learning autonomy, we propose a first model for cumulative learning using a second-order conditioning approach, with very encouraging results.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129416301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning stable pushing locations","authors":"Tucker Hermans, Fuxin Li, James M. Rehg, A. Bobick","doi":"10.1109/DEVLRN.2013.6652539","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652539","url":null,"abstract":"We present a method by which a robot learns to predict effective push-locations as a function of object shape. The robot performs push experiments at many contact locations on multiple objects and records local and global shape features at each point of contact. The robot observes the outcome trajectories of the manipulations and computes a novel push-stability score for each trial. The robot then learns a regression function in order to predict push effectiveness as a function of object shape. This mapping allows the robot to select effective push locations for subsequent objects whether they are previously manipulated instances, new instances from previously encountered object classes, or entirely novel objects. In the totally novel object case, the local shape property coupled with the overall distribution of the object allows for the discovery of effective push locations. These results are demonstrated on a mobile manipulator robot pushing a variety of household objects on a tabletop surface.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121859117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
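The pipeline in the abstract above (shape features at candidate contact points, a learned regression, then selection of the best-scoring push location) can be sketched as follows. The paper's actual regression method and features are not given here, so a k-nearest-neighbour regressor over hypothetical feature vectors stands in for the learned function; all names are illustrative.

```python
def predict_push_score(features, training_data, k=3):
    """Predict a push-stability score for a candidate contact point.

    training_data: list of (feature_vector, observed_score) pairs collected
    from past push experiments. A k-NN average stands in for the paper's
    learned regression function.
    """
    dists = sorted(
        (sum((f - g) ** 2 for f, g in zip(features, feats)), score)
        for feats, score in training_data
    )
    nearest = dists[:k]
    return sum(score for _, score in nearest) / len(nearest)


def best_push_location(candidates, training_data, k=3):
    """Pick the (location, features) candidate with the highest predicted score."""
    return max(candidates, key=lambda c: predict_push_score(c[1], training_data, k))
```

Because the score is predicted purely from shape features, the same selection step applies to previously seen objects and to novel ones alike, which is the generalization the abstract describes.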
{"title":"Goal babbling with unknown ranges: A direction-sampling approach","authors":"Matthias Rolf","doi":"10.1109/DEVLRN.2013.6652526","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652526","url":null,"abstract":"Goal babbling is a recent concept for the efficient bootstrapping of sensorimotor coordination that is inspired by infants' early goal-directed movement attempts. Several studies have shown its superior performance compared to random motor babbling. Yet, previous implementations of goal babbling require knowledge of a set of achievable goals in advance. This paper introduces an approach to goal babbling that can bootstrap coordination skills without pre-specifying, or even representing, a set of goals. On the contrary, it can discover the ranges of achievable goals autonomously. This capability is demonstrated in a challenging task with up to 50 degrees of freedom, in which the discovery of possible outcomes is shown to be desperately intractable with random motor babbling.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":"692 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126341665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
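Since the abstract above describes sampling directions in goal space rather than drawing goals from a pre-specified set, the sampling step might be sketched as follows. This is a minimal illustration, not the paper's formulation; the step size and dimensionality are assumptions.

```python
import math
import random

def sample_direction(dim):
    """Draw a uniformly random unit vector in goal space by normalizing
    a standard Gaussian sample (the direction-sampling step)."""
    v = [random.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def extend_goal(current_goal, step=0.05):
    """Probe outward: move the current goal a small step along a sampled
    direction, so achievable ranges are discovered rather than pre-specified."""
    d = sample_direction(len(current_goal))
    return [c + step * x for c, x in zip(current_goal, d)]
```

Goals generated this way can drift past the boundary of what the body can achieve, at which point observed outcomes stop following the targets, which is how the reachable range can be discovered online.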
{"title":"Reinforcement learning with state-dependent discount factor","authors":"N. Yoshida, E. Uchibe, K. Doya","doi":"10.1109/DEVLRN.2013.6652533","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652533","url":null,"abstract":"Conventional reinforcement learning algorithms have several parameters, called meta-parameters, which determine the character of the learning process. In this study, we focus on the discount factor, which influences the time scale of the tradeoff between immediate and delayed rewards. The discount factor is usually taken to be a constant, but here we introduce a state-dependent discount function and a new optimization criterion for reinforcement learning. We first derive a new algorithm under this criterion, named ExQ-learning, and prove that it converges with probability 1 to the action-value function that is optimal in the sense of the new criterion. We then present a framework for optimizing the discount factor and the discount function using an evolutionary algorithm. To validate the proposed method, we conduct a simple computer simulation and show that the proposed algorithm can find an appropriate state-dependent discount function that performs better than a constant discount factor.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126497515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
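The abstract above does not give ExQ-learning's update rule, but the core idea of replacing a constant discount with a state-dependent one can be illustrated with a plain tabular Q-learning backup. This sketch is an illustration under that assumption, not the paper's algorithm.

```python
def q_update(Q, gamma_of, s, a, r, s_next, actions, alpha=0.1):
    """One tabular Q-learning step in which the discount depends on the
    successor state: gamma_of(s_next) replaces the usual constant gamma."""
    # Greedy backup value at the successor state.
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    # State-dependent discounting of the future value.
    td_target = r + gamma_of(s_next) * best_next
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (td_target - Q.get((s, a), 0.0))
```

A constant discount factor is recovered as the special case `gamma_of = lambda s: gamma`; the evolutionary outer loop the abstract mentions would then search over candidate `gamma_of` functions.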
{"title":"Do beliefs about a robot's capabilities influence alignment to its actions?","authors":"Anna-Lisa Vollmer, B. Wrede, K. Rohlfing, A. Cangelosi","doi":"10.1109/DEVLRN.2013.6652521","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652521","url":null,"abstract":"Interlocutors in a dialog align on many aspects of behavior (word choice, speech rate, syntactic structure, gestures, facial expressions, etc.). Such alignment has been proposed to be the basis of successful communication. We believe alignment could be beneficial for smooth human-robot interaction and could facilitate robot action learning from demonstration. Recent research has put forward a mediated communicative design account of alignment, according to which interlocutors align more strongly when they believe it will lead to communicative success. Branigan et al. showed that when interacting with an artificial system, participants aligned their lexical choices more to a system they believed to be basic than to one they believed to be advanced. Our work extends these results in two ways. First, instead of an artificial computer dialog system, participants interact with a humanoid robot, the iCub. Second, instead of lexical choice, our work investigates alignment in the domain of manual actions. In an action demonstration and matching game, we examine the extent to which participants who believe they are playing with a basic or an advanced version of the iCub robot adapt the way they execute actions to what their robot partner has previously shown them. Our results confirm that alignment also takes place in action demonstration. We were not able to replicate Branigan et al.'s results in general in this setup, but, in line with their findings, participants with low neuroticism questionnaire scores and participants familiar with robots aligned their actions more to a robot they believed to be basic than to one they believed to be advanced.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123990138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Autonomous learning of active multi-scale binocular vision","authors":"L. Lonini, Yu Zhao, Pramod Chandrashekhariah, Bertram E. Shi, J. Triesch","doi":"10.1109/DEVLRN.2013.6652541","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652541","url":null,"abstract":"We present a method for autonomously learning representations of visual disparity between images from left and right eye, as well as appropriate vergence movements to fixate objects with both eyes. A sparse coding model (perception) encodes sensory information using binocular basis functions, while a reinforcement learner (behavior) generates the eye movement, according to the sensed disparity. Perception and behavior develop in parallel, by minimizing the same cost function: the reconstruction error of the stimulus by the generative model. In order to efficiently cope with multiple disparity ranges, sparse coding models are learnt at multiple scales, encoding disparities at various resolutions. Similarly, vergence commands are defined on a logarithmic scale to allow both coarse and fine actions. We demonstrate the efficacy of the proposed method using the humanoid robot iCub. We show that the model is fully self-calibrating and does not require any prior information about the camera parameters or the system dynamics.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129155063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards understanding the origin of infant directed speech: A vocal robot with infant-like articulation","authors":"Yuki Sasamoto, Naoto Nishijima, M. Asada","doi":"10.1109/DEVLRN.2013.6652562","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652562","url":null,"abstract":"Infant-Directed Speech (IDS) is the non-standard form of caregivers' speech to their infants. Developmental studies indicate that IDS changes from infant-directed to adult-directed depending on the infant's age and/or linguistic level. However, it is still unclear which features of infants elicit IDS. This article introduces a vocal robot with an infant-like articulatory system to address this issue through a constructive approach. A preliminary experiment suggests that our robot can produce vocalizations structurally similar to infant articulation, although its mechanics are rather different.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127029573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Structural bootstrapping at the sensorimotor level for the fast acquisition of action knowledge for cognitive robots","authors":"E. Aksoy, M. Tamosiunaite, Rok Vuga, A. Ude, C. Geib, Mark Steedman, F. Wörgötter","doi":"10.1109/DEVLRN.2013.6652537","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652537","url":null,"abstract":"Autonomous robots are faced with the problem of encoding complex actions (e.g. complete manipulations) in a generic and generalizable way. Recently we introduced Semantic Event Chains (SECs) as a new representation which can be computed directly from a stream of 3D images and is based on changes in the relationships between objects involved in a manipulation. Here we show that the SEC framework can be extended (the “extended SEC”) with action-related information and used to achieve and encode two important cognitive properties relevant for advanced autonomous robots: the extended SEC enables us to determine whether an action representation (1) needs to be newly created and stored in its entirety in the robot's memory, or (2) whether one of the already known and memorized action representations merely needs to be refined. In human cognition these two processes are known as accommodation and assimilation. Thus, we show that the extended SEC representation can be used to realize, for the first time in a robotic application, these processes originally defined by Piaget. This is of fundamental importance for any cognitive agent, as it allows categorizing observed actions as new versus known, storing only the relevant aspects.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122746347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Temporal emphasis for goal extraction in task demonstration to a humanoid robot by naive users","authors":"Konstantinos Theofilis, K. Lohan, Chrystopher L. Nehaniv, K. Dautenhahn, B. Wrede","doi":"10.1109/DEVLRN.2013.6652536","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652536","url":null,"abstract":"Goal extraction in learning by demonstration is a complex problem. A novel approach, inspired by developmental psychology and designed for experiments with naive users, is presented in this paper. Participants demonstrated a simple task, stacking three boxes, to the humanoid robot iCub. The stationary states of the task - 1 box, 2 boxes stacked, 3 boxes stacked - were defined and the time span of each state was measured. Analysis showed that users keep the boxes stationary significantly longer upon completion of the end goal than upon completion of the sub-goals. A simple and straightforward learning algorithm was then applied to the demonstration data, using only the time spans of the stationary states, and successfully detected the end goal. These temporal differences, functioning as emphasis, could be used as a complementary mechanism for goal extraction in imitation learning. Furthermore, since such a simple algorithm can use these pauses to recognise the goal state, humans may also be able to use them as a complementary mechanism for recognising the goal state of a task.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115062060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
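The temporal-emphasis idea in the abstract above (the state held stationary the longest marks the end goal) reduces to a very small procedure. This sketch assumes the demonstration has already been segmented into (state, duration) pairs, as in the stacking task; the function name and data shape are illustrative.

```python
def extract_end_goal(stationary_spans):
    """Given (state_label, duration_seconds) pairs measured during a
    demonstration, return the state held stationary the longest --
    the temporal emphasis that marks the end goal."""
    return max(stationary_spans, key=lambda span: span[1])[0]
```

For example, if a demonstrator pauses for 3.2 s on the completed three-box stack but only about 1 s on each intermediate state, the stack is selected as the end goal.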
{"title":"Adaptive reachability assessment in the humanoid robot iCub","authors":"Salomón Ramírez-Contla, D. Marocco","doi":"10.1109/DEVLRN.2013.6652546","DOIUrl":"https://doi.org/10.1109/DEVLRN.2013.6652546","url":null,"abstract":"We present a model for reachability assessment implemented in a simulated iCub humanoid robot. The robot uses a neural network both for estimating reachability and as a controller for the arm. During training, multi-modal information including vision and proprioception of the effector's length was provided, along with tactile and postural information. The task was to assess whether a target in view was within reach. After training with data from two different effector lengths, the system also generalised to a third, both for producing reaching postures and for assessing reachability. We present preliminary results showing good reachability predictions, with a decrease in confidence that displays a depth gradient.","PeriodicalId":106997,"journal":{"name":"2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132641434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}