Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents — Latest Publications

Multi-modal social interaction recognition using view-invariant features
Rim Trabelsi, Jagannadan Varadarajan, Yong Pei, Le Zhang, I. Jabri, A. Bouallègue, P. Moulin
DOI: https://doi.org/10.1145/3139491.3139501 · Published: 2017-11-13
Abstract: This paper addresses the issue of analyzing social interactions between humans in videos. We focus on recognizing dyadic human interactions from multi-modal data, specifically depth, color, and skeleton sequences. First, we introduce a new person-centric proxemic descriptor, named PROF, extracted from skeleton data and able to incorporate intrinsic and extrinsic distances between two interacting persons in a view-invariant scheme. Then, a novel key-frame selection approach is introduced to identify salient instants of the interaction sequence based on joint energy. From the RGB-D videos, more holistic CNN features are extracted by applying adaptive pre-trained CNNs to optical-flow frames. Features from the three modalities are combined and then classified using a linear SVM. Finally, extensive experiments carried out on two multi-modal, multi-view interaction datasets demonstrate the robustness of the introduced approach compared to state-of-the-art methods.
Citations: 0
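
The fusion step summarized above (skeleton-derived proxemic distances combined with per-modality CNN features and classified with a linear SVM) can be illustrated with a minimal sketch. This is not the authors' PROF implementation: the joint count, feature dimensions, synthetic data, and plain concatenation fusion are assumptions made purely for illustration.

```python
# Illustrative sketch only: toy proxemic distances from two skeletons,
# fused with stand-in CNN features and classified with a linear SVM.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def proxemic_distances(skel_a, skel_b):
    """Toy proxemic descriptor for one frame.

    skel_a, skel_b: (J, 3) arrays of 3-D joint positions of the two persons.
    Returns intrinsic (within-person) and extrinsic (between-person)
    pairwise joint distances, flattened into a single vector.
    """
    intrinsic_a = np.linalg.norm(skel_a[:, None] - skel_a[None, :], axis=-1)
    intrinsic_b = np.linalg.norm(skel_b[:, None] - skel_b[None, :], axis=-1)
    extrinsic = np.linalg.norm(skel_a[:, None] - skel_b[None, :], axis=-1)
    return np.concatenate([intrinsic_a.ravel(), intrinsic_b.ravel(), extrinsic.ravel()])

# Synthetic stand-ins for the three modalities (skeleton, depth, RGB/flow CNN features).
rng = np.random.default_rng(0)
n_clips, n_joints, cnn_dim = 200, 15, 128
skeleton_feats = np.stack([
    proxemic_distances(rng.normal(size=(n_joints, 3)), rng.normal(size=(n_joints, 3)))
    for _ in range(n_clips)
])
depth_cnn_feats = rng.normal(size=(n_clips, cnn_dim))
rgb_cnn_feats = rng.normal(size=(n_clips, cnn_dim))
labels = rng.integers(0, 6, size=n_clips)  # e.g. 6 interaction classes (assumption)

fused = np.hstack([skeleton_feats, depth_cnn_feats, rgb_cnn_feats])  # concatenation fusion
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
clf.fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))
```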
Integration and evaluation of social competences such as humor in an artificial interactive agent
Matthieu Riou, B. Jabaian, Stéphane Huet, T. Chaminade, F. Lefèvre
DOI: https://doi.org/10.1145/3139491.3139495 · Published: 2017-11-13
Abstract: In this paper, we present a brief overview of our ongoing work on artificial interactive agents and their adaptation to users. Several possibilities for introducing humorous productions into a spoken dialogue system are investigated in order to enhance naturalness during social interactions between the agent and the user. We finally describe our plan for how neuroscience will help to better evaluate the proposed systems, both objectively and subjectively.
Citations: 1
Who has to do it? the use of personal pronouns in human-human and human-robot-interaction
Brigitte Krenn, Stephanie Gross, L. Nussbaumer
DOI: https://doi.org/10.1145/3139491.3139502 · Published: 2017-11-13
Abstract: In human communication, pronouns are an important means of perspective taking; in task-oriented communication in particular, personal pronouns indicate who has to do what at a given moment in a given task. The ability to handle task-related discourse is a factor in enabling robots to interact with people in their homes in everyday life. Both the learning and the resolution of personal pronouns pose a challenge for robot architectures, as the system must permanently adapt to the human interlocutor's use of personal pronouns. The use of ich, du, wir (I, you, we) in particular can be confusing for the robot's natural language processing system.
Citations: 1
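
The perspective-taking problem described above can be made concrete with a minimal sketch, which is not the authors' system: when interpreting a user utterance, the robot must flip the deictic perspective, so the user's "you"/"du" refers to the robot, "I"/"ich" to the user, and "we"/"wir" to both. The pronoun table and assignment logic below are simplifying assumptions for illustration.

```python
# Illustrative sketch: map personal pronouns in a user utterance to the
# actor(s) the task is assigned to, from the robot's perspective.
PRONOUN_TO_ACTORS = {
    "i": {"user"}, "ich": {"user"},             # speaker refers to themselves
    "you": {"robot"}, "du": {"robot"},          # speaker addresses the robot
    "we": {"user", "robot"}, "wir": {"user", "robot"},
}

def who_has_to_do_it(utterance: str) -> set:
    """Return the set of actors the utterance assigns the task to."""
    actors = set()
    for token in utterance.lower().replace(",", " ").split():
        actors |= PRONOUN_TO_ACTORS.get(token, set())
    return actors or {"unspecified"}

print(who_has_to_do_it("You take the cup"))    # {'robot'}
print(who_has_to_do_it("Ich mache das"))       # {'user'}
print(who_has_to_do_it("We clean the table"))  # {'user', 'robot'}
```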
Recognizing emotions in spoken dialogue with acoustic and lexical cues
Leimin Tian, Johanna D. Moore, Catherine Lai
DOI: https://doi.org/10.1145/3139491.3139497 · Published: 2017-11-13
Abstract: Emotions play a vital role in human communication. Therefore, it is desirable for virtual-agent dialogue systems to recognize and react to the user's emotions. However, current automatic emotion recognizers have limited performance compared to humans. Our work attempts to improve the performance of emotion recognition in spoken dialogue by identifying dialogue cues predictive of emotions and by building multimodal recognition models with a knowledge-inspired hierarchy. We conduct experiments on both spontaneous and acted dialogue data to study the efficacy of the proposed approaches. Our results show that including prior knowledge of emotions in dialogue, in either the feature representation or the model structure, is beneficial for automatic emotion recognition.
Citations: 5
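
As a rough illustration of multimodal acoustic-plus-lexical fusion, the sketch below uses a generic two-stage stacking scheme: per-modality classifiers feed their class posteriors into a second-stage model. This is not the paper's knowledge-inspired hierarchy; the feature dimensions, label set, and choice of logistic-regression classifiers are assumptions.

```python
# Hedged sketch of two-stage (hierarchical) multimodal emotion recognition.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_utts, acoustic_dim, lexical_dim, n_emotions = 300, 40, 100, 4
X_acoustic = rng.normal(size=(n_utts, acoustic_dim))  # e.g. prosodic/spectral statistics
X_lexical = rng.normal(size=(n_utts, lexical_dim))    # e.g. averaged word embeddings
y = rng.integers(0, n_emotions, size=n_utts)

# Stage 1: independent modality-specific classifiers.
clf_acoustic = LogisticRegression(max_iter=1000).fit(X_acoustic, y)
clf_lexical = LogisticRegression(max_iter=1000).fit(X_lexical, y)

# Stage 2: fuse the per-modality posteriors in a higher-level classifier.
stage2_input = np.hstack([
    clf_acoustic.predict_proba(X_acoustic),
    clf_lexical.predict_proba(X_lexical),
])
fusion = LogisticRegression(max_iter=1000).fit(stage2_input, y)
print("fused training accuracy:", fusion.score(stage2_input, y))
```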
Dialogue management in task-oriented dialogue systems
P. Blache
DOI: https://doi.org/10.1145/3139491.3139507 · Published: 2017-11-13
Abstract: This paper presents a new framework for implementing a dialogue manager that makes it possible to infer new information in the course of the interaction as well as to generate responses from the virtual agent. The approach relies on a specific organization of knowledge bases, including the creation of a common ground and a belief base. Moreover, the same type of rules implements both inference and control of the dialogue. The approach is implemented within a dialogue system for training doctors to break bad news (ACORFORMed).
Citations: 5
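
The idea that one rule format drives both inference and dialogue control can be sketched as follows. This is an illustration, not the ACORFORMed implementation: the fact names, rules, and firing policy are invented for the example.

```python
# Minimal sketch of a rule-driven dialogue manager with a belief base and a
# common ground: each rule can both add facts to the common ground
# (inference) and produce an agent response (control).
from dataclasses import dataclass, field

@dataclass
class DialogueState:
    belief_base: set = field(default_factory=set)    # the agent's private beliefs
    common_ground: set = field(default_factory=set)  # facts established with the user

# Each rule: (preconditions, facts added to the common ground, response or None).
RULES = [
    ({"diagnosis_known", "patient_unaware"}, {"bad_news_pending"}, None),
    ({"bad_news_pending"}, {"bad_news_announced"},
     "I'm afraid I have some difficult news to share with you."),
]

def step(state: DialogueState):
    """Fire every applicable rule once; return the responses generated."""
    responses = []
    known = state.belief_base | state.common_ground
    for preconditions, additions, response in RULES:
        if preconditions <= known and not additions <= state.common_ground:
            state.common_ground |= additions
            if response:
                responses.append(response)
    return responses

state = DialogueState(belief_base={"diagnosis_known", "patient_unaware"})
print(step(state))  # [] -- inference only: 'bad_news_pending' enters the common ground
print(step(state))  # ["I'm afraid I have some difficult news to share with you."]
```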
Analyses of the effects of agents' performing self-adaptors
Tomoko Koda
DOI: https://doi.org/10.1145/3139491.3139503 · Published: 2017-11-13
Abstract: This paper presents the results of a series of experiments on impressions of agents that perform self-adaptors. Human-human interactions were video-taped and analyzed with respect to the use of different types of self-adaptors (relaxed/stressful) and gender-specific self-adaptors (masculine/feminine). We then implemented virtual agents that performed these self-adaptors. Evaluation of the interactions between humans and agents suggested that: 1) relaxed self-adaptors were more likely to prevent deterioration in the agents' perceived friendliness than the absence of self-adaptors; 2) people with higher social skills perceive agents that exhibit self-adaptors as friendlier than do people with lower social skills; 3) impressions of interactions with agents are formed by mutual interaction between the self-adaptors and the conversational content; 4) there are cultural differences in sensitivity to another culture's self-adaptors; and 5) impressions of agents that perform gender-specific self-adaptors differ between participant genders.
Citations: 1
Challenges for adaptive dialogue management in the KRISTINA project
Louisa Pragst, Juliana Miehle, W. Minker, Stefan Ultes
DOI: https://doi.org/10.1145/3139491.3139508 · Published: 2017-11-13
Abstract: Access to health-care-related information can be vital and should be easily accessible. However, immigrants often have difficulty obtaining the relevant information due to language barriers and cultural differences. In the KRISTINA project, we address these difficulties by creating a socially competent multimodal dialogue system that can assist immigrants with health-care-related questions. Dialogue management, as the core component responsible for the system's behaviour, has a significant impact on the successful reception of such a system. Hence, this work presents the specific challenges the KRISTINA project poses for adaptive dialogue management, namely the handling of a large dialogue domain and the cultural adaptability required by the envisioned dialogue system, and our approach to handling them.
Citations: 6
Social talk: making conversation with people and machine
E. Gilmartin, Marine Collery, Ketong Su, Yuyun Huang, Christy Elias, Benjamin R. Cowan, N. Campbell
DOI: https://doi.org/10.1145/3139491.3139494 · Published: 2017-11-13
Abstract: Social or interactive talk differs from task-based or instrumental interaction in many ways. Quantitative knowledge of these differences will aid the design of convincing human-machine interfaces for applications that require machines to take on roles such as social companion, healthcare provider, or tutor. We briefly review accounts of social talk from the literature. We outline a three-part data collection of human-human, human-WOZ (Wizard-of-Oz), and human-machine dialogues incorporating light social talk and a guessing game. Finally, we describe our ongoing experiments on the collected corpus.
Citations: 13
En route to a better integration and evaluation of social capacities in vocal artificial agents
F. Lefèvre
DOI: https://doi.org/10.1145/3139491.3139506 · Published: 2017-11-13
Abstract: This talk presents ongoing work on vocal artificial agents in the Vocal Interaction Group at LIA, University of Avignon, with a focus on the research line aiming to endow such interactive agents with human-like social abilities. After a short overview of the state of the art in spoken dialogue systems, a summary of recent efforts to improve system development through online learning from social signals is given. Two examples of skills favouring human-like social interaction are then presented: first, a new turn-taking management scheme based on incremental processing and reinforcement learning; second, automatic generation and usage optimisation of humor traits. These studies converge toward interactive systems that could foster studies in the human sciences to better understand the specificities of human social communication.
Citations: 1
Greta: a conversing socio-emotional agent
C. Pelachaud
DOI: https://doi.org/10.1145/3139491.3139902 · Published: 2017-11-13
Abstract: To create socially aware virtual agents, we conduct research along two main directions: 1) developing richer models of multimodal behaviors for the agent; and 2) making the agent a more socially competent interlocutor.
Citations: 7