{"title":"Understanding Object Weight from Human and Humanoid Lifting Actions","authors":"A. Sciutti, Laura Patanè, F. Nori, G. Sandini","doi":"10.1109/TAMD.2014.2312399","DOIUrl":"https://doi.org/10.1109/TAMD.2014.2312399","url":null,"abstract":"Humans are very good at interacting with each other. This natural ability depends, among other factors, on an implicit communication mediated by motion observation. By simple action observation we can easily infer not only the goal of an agent, but often also some “hidden” properties of the object he is manipulating, as its weight or its temperature. This implicit understanding is developed early in childhood and is supposedly based on a common motor repertoire between the cooperators. In this paper, we have investigated whether and under which conditions it is possible for a humanoid robot to foster the same kind of automatic communication, focusing on the ability to provide cues about object weight with action execution. We have evaluated on which action properties weight estimation is based in humans and we have accordingly designed a set of simple robotic lifting behaviors. Our results show that subjects can reach a performance in weight recognition from robot observation comparable to that obtained during human observation, with no need of training. 
These findings suggest that it is possible to design robot behaviors that are implicitly understandable by nonexpert partners and that this approach could be a viable path to obtain more natural human-robot collaborations.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"6 1","pages":"80-92"},"PeriodicalIF":0.0,"publicationDate":"2014-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2014.2312399","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attentional Mechanisms for Socially Interactive Robots–A Survey","authors":"J. Ferreira, J. Dias","doi":"10.1109/TAMD.2014.2303072","DOIUrl":"https://doi.org/10.1109/TAMD.2014.2303072","url":null,"abstract":"This review intends to provide an overview of the state of the art in the modeling and implementation of automatic attentional mechanisms for socially interactive robots. Humans assess and exhibit intentionality by resorting to multisensory processes that are deeply rooted within low-level automatic attention-related mechanisms of the brain. For robots to engage with humans properly, they should also be equipped with similar capabilities. Joint attention, the precursor of many fundamental types of social interactions, has been an important focus of research in the past decade and a half, therefore providing the perfect backdrop for assessing the current status of state-of-the-art automatic attentional-based solutions. Consequently, we propose to review the influence of these mechanisms in the context of social interaction in cutting-edge research work on joint attention. 
This will be achieved by summarizing the contributions already made in these matters in robotic cognitive systems research, by identifying the main scientific issues to be addressed by these contributions and analyzing how successful they have been in this respect, and by consequently drawing conclusions that may suggest a roadmap for future successful research efforts.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"6 1","pages":"110-125"},"PeriodicalIF":0.0,"publicationDate":"2014-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2014.2303072","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62762490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From Saccades to Grasping: A Model of Coordinated Reaching Through Simulated Development on a Humanoid Robot","authors":"J. Law, Patricia Shaw, Mark H. Lee, Michael Sheldon","doi":"10.1109/TAMD.2014.2301934","DOIUrl":"https://doi.org/10.1109/TAMD.2014.2301934","url":null,"abstract":"Infants demonstrate remarkable talents in learning to control their sensory and motor systems. In particular the ability to reach to objects using visual feedback requires overcoming several issues related to coordination, spatial transformations, redundancy, and complex learning spaces. This paper describes a model of longitudinal development that covers the full sequence from blind motor babbling to successful grasping of seen objects. This includes the learning of saccade control, gaze control, torso control, and visually-elicited reaching and grasping in 3-D space. This paper builds on and extends our prior investigations into the development of gaze control, eye-hand coordination, the use of constraints to shape learning, and a schema memory system for the learning of sensorimotor experience. New contributions include our application of the LWPR algorithm to learn how movements of the torso affect the robot's representation of space, and the first use of the schema framework to enable grasping and interaction with objects. 
The results from our integration of these various components into an implementation of longitudinal development on an iCub robot show their ability to generate infant-like development, from a start point with zero coordination up to skilled spatial reaching in less than three hours.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"6 1","pages":"93-109"},"PeriodicalIF":0.0,"publicationDate":"2014-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2014.2301934","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62762258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Humanoid Tactile Gesture Production using a Hierarchical SOM-based Encoding","authors":"G. Pierris, T. Dahl","doi":"10.1109/TAMD.2014.2313615","DOIUrl":"https://doi.org/10.1109/TAMD.2014.2313615","url":null,"abstract":"The existence of cortical hierarchies has long since been established and the advantages of hierarchical encoding of sensor-motor data for control, have long been recognized. Less well understood are the developmental processes whereby such hierarchies are constructed and subsequently used. This paper presents a new algorithm for encoding sequential sensor and actuator data in a dynamic, hierarchical neural network that can grow to accommodate the length of the observed interactions. The algorithm uses a developmental robotics methodology as it extends the Constructivist Learning Architecture, a computational theory of infant cognitive development. This paper presents experimental data demonstrating how the extended algorithm goes beyond the original theory by supporting goal oriented control. The domain studied is the encoding and reproduction of tactile gestures in humanoid robots. In particular, we present results from using a Programming by Demonstration approach to encode a stroke gesture. 
Our results demonstrate how the novel encoding enables a Nao humanoid robot with a touch sensitive fingertip to successfully encode and reproduce a stroke gesture in the presence of perturbations from internal and external forces.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"6 1","pages":"153-167"},"PeriodicalIF":0.0,"publicationDate":"2014-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2014.2313615","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62762709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Which Object Fits Best? Solving Matrix Completion Tasks with a Humanoid Robot","authors":"Connor Schenck, J. Sinapov, David Johnston, A. Stoytchev","doi":"10.1109/TAMD.2014.2325822","DOIUrl":"https://doi.org/10.1109/TAMD.2014.2325822","url":null,"abstract":"Matrix completion tasks commonly appear on intelligence tests. Each task consists of a grid of objects, with one missing, and a set of candidate objects. The job of the test taker is to pick the candidate object that best fits in the empty square in the matrix. In this paper we explore methods for a robot to solve matrix completion tasks that are posed using real objects instead of pictures of objects. Using several different ways to measure distances between objects, the robot detected patterns in each task and used them to select the best candidate object. When using all the information gathered from all sensory modalities and behaviors, and when using the best method for measuring the perceptual distances between objects, the robot was able to achieve 99.44% accuracy over the posed tasks. This shows that the general framework described in this paper is useful for solving matrix completion tasks.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"6 1","pages":"226-240"},"PeriodicalIF":0.0,"publicationDate":"2014-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2014.2325822","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62762943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning of Social Signatures Through Imitation Game Between a Robot and a Human Partner","authors":"S. Boucenna, S. Anzalone, Elodie Tilmont, D. Cohen, M. Chetouani","doi":"10.1109/TAMD.2014.2319861","DOIUrl":"https://doi.org/10.1109/TAMD.2014.2319861","url":null,"abstract":"In this paper, a robot learns different postures by imitating several partners. We assessed the effect of the type of partners, i.e., adults, typically developing (TD) children and children with autism spectrum disorder (ASD), on robot learning during an imitation game. The experimental protocol was divided into two phases: 1) a learning phase, during which the robot produced a random posture and the partner imitated the robot; and 2) a phase in which the roles were reversed and the robot had to imitate the posture of the human partner. Robot learning was based on a sensory-motor architecture whereby neural networks (N.N.) enabled the robot to associate what it did with what it saw. Several metrics (i.e., annotation, the number of neurons needed to learn, and normalized mutual information) were used to show that the partners affected robot learning. The first result obtained was that learning was easier with adults than with both groups of children, indicating a developmental effect. Second, learning was more complex with children with ASD compared to both adults and TD children. 
Third, learning with the more complex partner first (i.e., children with ASD) enabled learning to be more easily generalized.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"6 1","pages":"213-225"},"PeriodicalIF":0.0,"publicationDate":"2014-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2014.2319861","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62762802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Model of Human Activity Automatization as a Basis of Artificial Intelligence Systems","authors":"A. Bielecki","doi":"10.1109/TAMD.2014.2319740","DOIUrl":"https://doi.org/10.1109/TAMD.2014.2319740","url":null,"abstract":"In this paper, a human activity automatization phenomenon is analyzed as a process as a result of which a cognitive structure is replaced by the equivalent reflexive structure. Such replacement plays an essential role as a mechanism that optimizes human mental processes according to their energetic and time consuming aspects. The main goal of the studies described in this paper is working out the algorithm that enables us to create the analogous mechanism in artificial intelligence (AI) systems. The solution would enable us to use in real time systems such AI systems, that, so far, could not have been used due to their high time consumption. The information metabolism theory (IMT) is the basis for the analysis. A cybernetic model of automatization, based on IMT, is introduced. There have been specified conditions according to which such solution is profitable. An automatization-type mechanism has been applied to IP traffic scanner and to a mutiagent system. As a result, time and memory properties of the systems have been improved significantly.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"6 1","pages":"169-182"},"PeriodicalIF":0.0,"publicationDate":"2014-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2014.2319740","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Successive Developmental Levels of Autobiographical Memory for Learning Through Social Interaction","authors":"G. Pointeau, Maxime Petit, Peter Ford Dominey","doi":"10.1109/TAMD.2014.2307342","DOIUrl":"https://doi.org/10.1109/TAMD.2014.2307342","url":null,"abstract":"A developing cognitive system will ideally acquire knowledge of its interaction in the world, and will be able to use that knowledge to construct a scaffolding for progressively structured levels of behavior. The current research implements and tests an autobiographical memory system by which a humanoid robot, the iCub, can accumulate its experience in interacting with humans, and extract regularities that characterize this experience. This knowledge is then used in order to form composite representations of common experiences. We first apply this to the development of knowledge of spatial locations, and relations between objects in space. We then demonstrate how this can be extended to temporal relations between events, including “before” and “after,” which structure the occurrence of events in time. In the system, after extended sessions of interaction with a human, the resulting accumulated experience is processed in an offline manner, in a form of consolidation, during which common elements of different experiences are generalized in order to generate new meanings. 
These learned meanings then form the basis for simple behaviors that, when encoded in the autobiographical memory, can form the basis for memories of shared experiences with the human, and which can then be reused as a form of game playing or shared plan execution.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"6 1","pages":"200-212"},"PeriodicalIF":0.0,"publicationDate":"2014-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2014.2307342","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62762635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using the Humanoid Robot KASPAR to Autonomously Play Triadic Games and Facilitate Collaborative Play Among Children With Autism","authors":"Joshua Wainer, B. Robins, F. Amirabdollahian, K. Dautenhahn","doi":"10.1109/TAMD.2014.2303116","DOIUrl":"https://doi.org/10.1109/TAMD.2014.2303116","url":null,"abstract":"This paper presents a novel design, implementation, and first evaluation of a triadic, collaborative game involving the humanoid robot, kinesics and synchronization in personal assistant robotics (KASPAR), playing games with pairs of children with autism. Children with autism have impaired social communication and social interaction skills which make it difficult for them to participate in many different forms of social and collaborative play. Our proof-of-concept 10-week, long term study demonstrates how a humanoid robot can be used to foster and support collaborative play among children with autism. In this work, KASPAR operates fully autonomously, and uses information on the state of the game and behavior of the children to engage, motivate, encourage, and advise pairs of children playing an imitation game. Results are presented from a first evaluation study which examined whether having pairs of children with autism play an imitative, collaborative game with a humanoid robot affected the way these children would play the same game without the robot. Our initial evaluation involved six children with autism who each participated in 23 controlled play sessions both with and without the robot, using a specially designed imitation-based collaborative game. In total 78 play sessions were run. Detailed observational analyses of the children's behaviors indicated that different pairs of children with autism showed improved social behaviors in playing with each other after they played as pairs with the robot KASPAR compared to before they did so. 
These results are encouraging and provide a proof-of-concept of using an autonomously operating robot to encourage collaborative skills among children with autism.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"6 1","pages":"183-199"},"PeriodicalIF":0.0,"publicationDate":"2014-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2014.2303116","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62762580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Approach to Subjective Computing: A Robot That Learns From Interaction With Humans","authors":"P. Grüneberg, Kenji Suzuki","doi":"10.1109/TAMD.2013.2271739","DOIUrl":"https://doi.org/10.1109/TAMD.2013.2271739","url":null,"abstract":"We present an approach to subjective computing for the design of future robots that exhibit more adaptive and flexible behavior in terms of subjective intelligence. Instead of encapsulating subjectivity into higher order states, we show by means of a relational approach how subjective intelligence can be implemented in terms of the reciprocity of autonomous self-referentiality and direct world-coupling. Subjectivity concerns the relational arrangement of an agent's cognitive space. This theoretical concept is narrowed down to the problem of coaching a reinforcement learning agent by means of binary feedback. Algorithms are presented that implement subjective computing. The relational characteristic of subjectivity is further confirmed by a questionnaire on human perception of the robot's behavior. The results imply that subjective intelligence cannot be externally observed. In sum, we conclude that subjective intelligence in relational terms is fully tractable and therefore implementable in artificial agents.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"6 1","pages":"5-18"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2013.2271739","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62761978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}