{"title":"Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents","authors":"T. Chaminade, Noël Nguyen, M. Ochs, F. Lefèvre","doi":"10.1145/3139491","DOIUrl":"https://doi.org/10.1145/3139491","url":null,"abstract":"","PeriodicalId":121205,"journal":{"name":"Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130670414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Introducing a ROS based planning and execution framework for human-robot interaction","authors":"C. Dondrup, Ioannis V. Papaioannou, Jekaterina Novikova, Oliver Lemon","doi":"10.1145/3139491.3139500","DOIUrl":"https://doi.org/10.1145/3139491.3139500","url":null,"abstract":"Working in human-populated environments requires fast and robust action selection and execution, especially when deliberately trying to interact with humans. This work presents the combination of a high-level planner (ROSPlan) for action sequencing and automatically generated finite state machines (PNP) for execution. Using this combined system, we are able to exploit the speed and robustness of the execution and the flexibility of the sequence generation, combining the positive aspects of both approaches.","PeriodicalId":121205,"journal":{"name":"Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130043023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Body language without a body: nonverbal communication in technology mediated settings","authors":"A. Vinciarelli","doi":"10.1145/3139491.3139510","DOIUrl":"https://doi.org/10.1145/3139491.3139510","url":null,"abstract":"Cognitive and psychological processes underlying social interaction are built around face-to-face interactions, the only possible and available communication setting during the long evolutionary process that resulted in Homo Sapiens. As the fraction of interactions that take place in technology-mediated settings keeps increasing, it is important to investigate how the cognitive and psychological processes mentioned above - ultimately grounded in neural structures - act in and react to the new interaction settings. In particular, it is important to investigate whether nonverbal communication - one of the main channels through which people convey socially and psychologically relevant information - still plays a role in settings where natural nonverbal cues (facial expressions, vocalisations, gestures, etc.) are no longer available. Addressing this issue has important implications not only for the understanding of cognition and psychology, but also for the design of interaction technology and the analysis of phenomena like cyberbullying and the viral diffusion of content, which play an important role in today's society.","PeriodicalId":121205,"journal":{"name":"Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126765866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intimately intelligent virtual agents: knowing the human beyond sensory input","authors":"Deborah Richards","doi":"10.1145/3139491.3139505","DOIUrl":"https://doi.org/10.1145/3139491.3139505","url":null,"abstract":"Despite being in the era of Big Data, where our devices seem to anticipate and feed our every desire, intelligent virtual agents appear to lack intimate and important knowledge of their user. Current cognitive agent architectures usually include situation awareness that allows agents to sense their environment, including their human partner, and provide congruent empathic behaviours. Depending on the framework, agents may exhibit their own personality, culture, memories, goals and reasoning styles. However, tailored adaptive behaviours, based on a multi-dimensional and deep understanding of the human and essential for enduring beneficial relationships in certain contexts, are lacking. In this paper, examples are provided of what an agent may need to know about the human in the application domains of education, health and cybersecurity, along with the challenges around agent adaptation and the acquisition of relevant data and knowledge.","PeriodicalId":121205,"journal":{"name":"Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121596363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Introducing ADELE: a personalized intelligent companion","authors":"Brendan Spillane, E. Gilmartin, Christian Saam, Ketong Su, Benjamin R. Cowan, S. Lawless, V. Wade","doi":"10.1145/3139491.3139492","DOIUrl":"https://doi.org/10.1145/3139491.3139492","url":null,"abstract":"This paper introduces ADELE, a Personalized Intelligent Companion designed to engage with users through spoken dialog to help them explore topics of interest. The system will maintain a user model of information consumption habits and preferences in order to (1) personalize the user's experience for ongoing interactions, and (2) build the user-machine relationship to model that of a friendly companion. The paper details the overall research goal, existing progress, the current focus, and the long term plan for the project.","PeriodicalId":121205,"journal":{"name":"Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents","volume":"9 25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122114867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using crowd-sourcing for the design of listening agents: challenges and opportunities","authors":"Catharine Oertel, Patrik Jonell, Kevin El Haddad, Éva Székely, Joakim Gustafson","doi":"10.1145/3139491.3139499","DOIUrl":"https://doi.org/10.1145/3139491.3139499","url":null,"abstract":"In this paper we describe how audio-visual corpus recordings made using crowd-sourcing techniques can be used for the audio-visual synthesis of attitudinal non-verbal feedback expressions for virtual agents. We discuss the limitations of this approach as well as the opportunities we see for this technology.","PeriodicalId":121205,"journal":{"name":"Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122638475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dialog acts in greeting and leavetaking in social talk","authors":"E. Gilmartin, Brendan Spillane, Maria O'Reilly, Ketong Su, Christian Saam, Benjamin R. Cowan, N. Campbell, V. Wade","doi":"10.1145/3139491.3139493","DOIUrl":"https://doi.org/10.1145/3139491.3139493","url":null,"abstract":"Conversation proceeds through dialog moves or acts, and dialog act annotation can aid the design of artificial dialog. While many dialogs are task-based or instrumental, with clear goals, as in the case of a service encounter or business meeting, many are more interactional in nature, as in friendly chats or longer casual conversations. Early research on dialog acts focused on transactional or task-based dialog, but work is now expanding to social aspects of interaction. We review how dialog annotation schemes treat non-task elements of dialog -- greeting and leave-taking sequences in particular. We describe the collection and annotation, using the ISO Standard 24617-2 semantic annotation framework, Part 2: Dialogue acts, of a corpus of 187 text dialogs, and study the dialog acts used in greeting and leave-taking.","PeriodicalId":121205,"journal":{"name":"Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129969532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A corpus for experimental study of affect bursts in human-robot interaction","authors":"Lucile Bechade, Kevin El Haddad, Juliette Bourquin, S. Dupont, L. Devillers","doi":"10.1145/3139491.3139496","DOIUrl":"https://doi.org/10.1145/3139491.3139496","url":null,"abstract":"This paper presents a data collection carried out in the framework of the Joker Project. Interaction scenarios have been designed in order to study the effects of affect bursts in a human-robot interaction and to build a system capable of using multilevel affect bursts in a human-robot interaction. We use two main audio expression cues: verbal (synthesised sentences) and nonverbal (affect bursts). The nonverbal cues used are sounds expressing disgust, amusement, fear, misunderstanding and surprise. Three different intensity levels for each sound have been generated for each emotion.","PeriodicalId":121205,"journal":{"name":"Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116510846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Could a virtual agent be warm and competent? investigating user's impressions of agent's non-verbal behaviours","authors":"Béatrice Biancardi, Angelo Cafaro, C. Pelachaud","doi":"10.1145/3139491.3139498","DOIUrl":"https://doi.org/10.1145/3139491.3139498","url":null,"abstract":"In this abstract we introduce the design of an experiment aimed at investigating how users' impressions of an embodied conversational agent are influenced by the agent's non-verbal behaviour. We focus on impressions of warmth and competence, the two fundamental dimensions of social perception. The agent's gestures, arms rest poses and smile frequency are manipulated, as well as users' expectations about the agent's competence. We hypothesize that users' judgments will differ according to their expectations, following the Expectancy Violation Theory proposed by Burgoon and colleagues. We also expect to replicate the results found in our previous study concerning human-human interaction; for example, a high frequency of smiles will elicit higher warmth and lower competence impressions compared to a low frequency of smiles, while crossed arms will elicit low competence and low warmth impressions.","PeriodicalId":121205,"journal":{"name":"Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124639709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How do artificial agents think?","authors":"T. Chaminade","doi":"10.1145/3139491.3139511","DOIUrl":"https://doi.org/10.1145/3139491.3139511","url":null,"abstract":"Anthropomorphic artificial agents, computed characters or humanoid robots, can be used to investigate human cognition. They are intrinsically ambivalent: they appear and act as humans, hence we should tend to consider them as human, yet we know they are machines designed by humans, and should not consider them as humans. Reviewing a number of behavioral and neurophysiological studies provides insights into social mechanisms that are primarily influenced by the appearance of the agent, in particular its resemblance to humans, and into other mechanisms that are influenced by the knowledge we have about the artificial nature of the agent. A significant finding is that, as expected, humans don't naturally adopt an intentional stance when interacting with artificial agents.","PeriodicalId":121205,"journal":{"name":"Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128490258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}