{"title":"Validating a Cortisol-Inspired Framework for Human-Robot Interaction with a Replication of the Still Face Paradigm","authors":"Sara Mongile, Ana Tanevska, F. Rea, A. Sciutti","doi":"10.1109/ICDL53763.2022.9962184","DOIUrl":"https://doi.org/10.1109/ICDL53763.2022.9962184","url":null,"abstract":"When interacting with others in our everyday life, we prefer the company of those who share the same desire for closeness and intimacy (or lack thereof), since this determines whether our interaction will be more or less pleasant. This sort of compatibility can be inferred from our innate attachment style. The attachment style represents our characteristic way of thinking, feeling, and behaving in close relationships, and beyond behaviour, it can also affect us biologically via our hormonal dynamics. When looking at how to enrich human-robot interaction (HRI), one potential solution could be enabling robots to understand their partners' attachment style, which could improve their perception of their partners and help them behave adaptively during the interaction. We propose to use the relationship between attachment style and the hormone cortisol to endow the humanoid robot iCub with an internal cortisol-inspired framework that allows it to infer the participant's attachment style from the effect of the interaction on its cortisol levels (referred to as R-cortisol). 
In this work, we present our cognitive framework and its validation during the replication of a well-known paradigm on hormonal modulation in human-human interaction (HHI) - the Still Face paradigm.","PeriodicalId":274171,"journal":{"name":"2022 IEEE International Conference on Development and Learning (ICDL)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121646285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
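The abstract describes an internal "R-cortisol" signal that reacts to the interaction and relaxes back toward a resting level, from whose dynamics an attachment style could be inferred. A minimal sketch of such a stress-reactive internal variable, assuming a simple linear rise-and-decay rule; the class name, constants, and update rule are illustrative assumptions, not the authors' model:

```python
# Hypothetical sketch of an internal "R-cortisol" variable: it rises under
# stressful interaction events (e.g., a non-responsive "still face" phase)
# and decays back toward a baseline. All names and constants are
# illustrative assumptions, not taken from the paper.

class RCortisol:
    def __init__(self, baseline=0.2, reactivity=0.3, recovery=0.1):
        self.baseline = baseline      # resting hormone level
        self.reactivity = reactivity  # gain on the stress input
        self.recovery = recovery      # rate of return to baseline
        self.level = baseline

    def step(self, stress):
        """Advance one time step; stress in [0, 1] (1 = still-face phase)."""
        self.level += self.reactivity * stress
        self.level -= self.recovery * (self.level - self.baseline)
        return self.level

model = RCortisol()
for _ in range(10):            # normal, responsive interaction
    model.step(stress=0.0)
calm = model.level
for _ in range(10):            # "still face" perturbation phase
    model.step(stress=1.0)
stressed = model.level
```

In a framework of this kind, the participant's attachment style would be read off from how sharply the signal rises and how quickly it recovers, rather than from its absolute value.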
{"title":"Disentangling Patterns and Transformations from One Sequence of Images with Shape-invariant Lie Group Transformer","authors":"Takumi Takada, Wataru Shimaya, Y. Ohmura, Y. Kuniyoshi","doi":"10.1109/ICDL53763.2022.9962232","DOIUrl":"https://doi.org/10.1109/ICDL53763.2022.9962232","url":null,"abstract":"An effective way to model the complex real world is to view it as a composition of basic components: objects and transformations. Although humans come to understand the compositionality of the real world through development, it is extremely difficult to equip robots with such a learning mechanism. In recent years, there has been significant research on autonomously learning representations of the world using deep learning; however, most studies have taken a statistical approach, which requires a large amount of training data. In contrast to such existing methods, we take a novel algebraic approach to representation learning, based on the simpler and more intuitive formulation that the observed world is a combination of multiple independent patterns and transformations that are invariant to the shape of the patterns. Since the shape of a pattern can be viewed as a feature invariant under symmetric transformations such as translation or rotation, we can expect that the patterns can naturally be extracted by expressing transformations with symmetric Lie group transformers and attempting to reconstruct the scene with them. Based on this idea, we propose a model that disentangles scenes into a minimum number of basic components of patterns and Lie transformations from only one sequence of images, by introducing learnable shape-invariant Lie group transformers as transformation components. 
Experiments show that given one sequence of images in which two objects are moving independently, the proposed model can discover the hidden distinct objects and multiple shape-invariant transformations that constitute the scenes.","PeriodicalId":274171,"journal":{"name":"2022 IEEE International Conference on Development and Learning (ICDL)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129405004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
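The core idea above, that a transformation is a one-parameter Lie group element exp(theta*G) and that a pattern's shape is invariant under it, can be illustrated concretely for 2-D rotations. A minimal sketch (not the paper's model): the generator G below produces rotations, and pairwise distances within a pattern are preserved by the resulting transform.

```python
import numpy as np

# Illustrative sketch of a one-parameter Lie group transform exp(theta * G),
# where G is the generator of 2-D rotations. "Shape invariance" shows up as
# preservation of pairwise distances within a pattern.

G = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # generator of 2-D rotations

def lie_transform(theta, terms=30):
    """Matrix exponential exp(theta * G) via a truncated power series."""
    M = np.eye(2)
    term = np.eye(2)
    for k in range(1, terms):
        term = term @ (theta * G) / k
        M = M + term
    return M

theta = 0.7
R = lie_transform(theta)                                  # rotation by theta
pattern = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])  # three points
moved = pattern @ R.T                                     # transformed pattern

# The transform changes the pose, not the shape: distances are preserved.
d_before = np.linalg.norm(pattern[1] - pattern[2])
d_after = np.linalg.norm(moved[1] - moved[2])
```

In the paper's setting, theta (and the choice of generator) would be learned so that composing such transforms with extracted patterns reconstructs the observed image sequence.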
{"title":"Accelerating the Learning of TAMER with Counterfactual Explanations","authors":"Jakob Karalus, F. Lindner","doi":"10.1109/ICDL53763.2022.9962222","DOIUrl":"https://doi.org/10.1109/ICDL53763.2022.9962222","url":null,"abstract":"The capability to learn interactively from human feedback would enable agents to be trained in new settings. For example, even novice users could train service robots on new tasks naturally and interactively. Human-in-the-loop Reinforcement Learning (HRL) combines human feedback with Reinforcement Learning (RL) techniques. State-of-the-art interactive learning techniques suffer from slow learning speed, leading to a frustrating experience for the human. We approach this problem by extending the HRL framework TAMER for evaluative feedback with the possibility to enhance human feedback with two different types of counterfactual explanations (action-based and state-based). We experimentally show that our extensions improve the speed of learning.","PeriodicalId":274171,"journal":{"name":"2022 IEEE International Conference on Development and Learning (ICDL)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133673097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
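The TAMER idea the abstract builds on is to learn a model of the human's evaluative signal H(s, a) and act greedily with respect to it; an action-based counterfactual additionally tells the agent which action would have been better. A minimal tabular sketch, assuming a simple delta-rule update and a fixed +1 credit for the suggested action; all names and the exact update rule are illustrative, not the authors' code:

```python
from collections import defaultdict

# Minimal tabular sketch of TAMER-style learning from evaluative human
# feedback, extended with an action-based counterfactual. The update rule
# and the +1.0 credit for the suggested action are illustrative assumptions.

class TamerAgent:
    def __init__(self, actions, lr=0.5):
        self.H = defaultdict(float)  # learned human-reward model H(s, a)
        self.actions = actions
        self.lr = lr

    def act(self, state):
        """Act greedily with respect to the learned human-reward model."""
        return max(self.actions, key=lambda a: self.H[(state, a)])

    def feedback(self, state, action, h):
        """Standard TAMER-style update toward the scalar human signal h."""
        key = (state, action)
        self.H[key] += self.lr * (h - self.H[key])

    def counterfactual_feedback(self, state, action, h, better_action):
        """Action-based counterfactual: besides scoring the taken action,
        also credit the action the human says would have been better."""
        self.feedback(state, action, h)
        self.feedback(state, better_action, 1.0)

agent = TamerAgent(actions=["left", "right"])
# Human: "left was bad (h = -1); right would have been better."
agent.counterfactual_feedback("s0", "left", h=-1.0, better_action="right")
```

The intuition for the speed-up is that one counterfactual interaction updates two state-action entries instead of one, so the agent needs fewer feedback rounds to rank the actions correctly.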