Latest Publications: Proceedings of the 5th International Conference on Human Agent Interaction

Expectations and First Experience with a Social Robot
Proceedings of the 5th International Conference on Human Agent Interaction | Pub Date: 2017-10-17 | DOI: 10.1145/3125739.3132610
Kristiina Jokinen, G. Wilcock
This paper concerns interaction with social robots and focuses on the evaluation of a robot application that allows users to access interesting information from Wikipedia. The evaluation method compares the users' expectations with their experience with the robot, and takes into account their self-declared previous experience with robots. The results show that most participants had an overall positive experience, even though the averages indicate a slight negative tendency related to expectations of the robot's behavior and being understood by the robot. Interestingly, the most experienced users seem to be the most critical.
Citations: 11
Designing Emotionally Expressive Robots: A Comparative Study on the Perception of Communication Modalities
Proceedings of the 5th International Conference on Human Agent Interaction | Pub Date: 2017-10-17 | DOI: 10.1145/3125739.3125744
Christiana Tsiourti, A. Weiss, K. Wac, M. Vincze
Socially assistive agents, be it virtual avatars or robots, need to engage in social interactions with humans and express their internal emotional states, goals, and desires. In this work, we conducted a comparative study to investigate how humans perceive emotional cues expressed by humanoid robots through five communication modalities (face, head, body, voice, locomotion) and examined whether the degree of a robot's human-like embodiment affects this perception. In an online survey, we asked people to identify emotions communicated by Pepper, a highly human-like robot, and Hobbit, a robot with abstract humanlike features. A qualitative and quantitative data analysis confirmed the expressive power of the face, but also demonstrated that body expressions or even simple head and locomotion movements could convey emotional information. These findings suggest that emotion recognition accuracy varies as a function of the modality, and a higher degree of anthropomorphism does not necessarily lead to a higher level of recognition accuracy. Our results further the understanding of how people respond to single communication modalities and have implications for designing recognizable multimodal expressions for robots.
Citations: 28
Symbol Emergence in Robotics for Modeling Human-Agent Interaction
Proceedings of the 5th International Conference on Human Agent Interaction | Pub Date: 2017-10-17 | DOI: 10.1145/3125739.3134522
T. Nagai
Human intelligence is deeply dependent on its physical body, and its development requires interaction between that body and the surrounding environment, including other agents. However, how to integrate low-level motor control with a high-level symbol manipulation system remains an open problem. One of our research goals in the area called "symbol emergence in robotics" is to build a computational model of human intelligence, from motor control to high-level symbol manipulation. This talk first introduces an unsupervised online learning algorithm that uses a hierarchical Bayesian framework to let robots categorize multimodal sensory signals such as audio, visual, and haptic information. The robot uses its physical body to grasp and observe an object from various viewpoints while listening to the sound made during the observation. The basic algorithm categorizes the collected multimodal data so that the robot can better infer unobserved information; we call the resulting categories multimodal concepts. The latter half of the talk discusses an integrated computational model of human intelligence from motor control to high-level cognition, whose core idea is to combine multimodal concepts with reinforcement learning. Furthermore, the talk attempts to model communication within the same framework, since the self-other discrimination process can be seen as multimodal categorization of sensorimotor signals.
Citations: 0
Continuous Multi-Modal Interaction Causes Human-Robot Alignment
Proceedings of the 5th International Conference on Human Agent Interaction | Pub Date: 2017-10-17 | DOI: 10.1145/3125739.3132599
Sebastian Wallkötter, Michael Joannou, Samuel Westlake, Tony Belpaeme
This study explores the effect of continuous interaction with a multi-modal robot on alignment in user dialogue. A game application of '20 Questions' was developed for a SoftBank Robotics NAO robot with supporting gestures, and a study was carried out in which subjects played a number of games. The robot's speech-comprehension confidence was logged and used to analyse the similarity between the application's legal dialogue and user speech. It was found that subjects significantly aligned their dialogue to the robot throughout continuous, multi-modal interaction.
Citations: 2
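The alignment measure the paper describes (similarity between user speech and the application's legal dialogue) can be sketched as a simple lexical overlap score; the function name, whitespace tokenization, and toy vocabulary below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: the share of a user's utterance tokens that fall inside
# the application's "legal" dialogue vocabulary. Higher values over successive
# games would indicate the user aligning to the robot's phrasing.

def alignment_score(utterance: str, legal_vocab: set) -> float:
    """Fraction of utterance tokens that appear in the application's legal dialogue."""
    tokens = utterance.lower().split()
    if not tokens:
        return 0.0
    return sum(t in legal_vocab for t in tokens) / len(tokens)

# Toy vocabulary, invented for illustration.
legal_vocab = {"is", "it", "an", "animal", "yes", "no", "alive", "bigger"}

early = alignment_score("um is that thing an animal or what", legal_vocab)
late = alignment_score("is it an animal", legal_vocab)
assert late > early  # later utterance reuses more of the robot's phrasing
```

Tracking such a score per game would give the kind of longitudinal alignment signal the study analyses.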
Modeling Player Activity in a Physical Interactive Robot Game Scenario
Proceedings of the 5th International Conference on Human Agent Interaction | Pub Date: 2017-10-17 | DOI: 10.1145/3125739.3132608
E. S. Oliveira, Davide Orrù, T. Nascimento, Andrea Bonarini
We propose a quantitative human player model for Physically Interactive RoboGames that can account for the combination of the player activity (physical effort) and interaction level. The model is based on activity recognition and a description of the player interaction (proximity and body contraction index) with the robot co-player. Our approach has been tested on a dataset collected from a real, physical robot game, where activity patterns extracted by a custom 3-axis accelerometer sensor module and by the Microsoft Kinect sensor are used. The proposed model design aims at inspiring approaches that can consider the activity of a human player in lively games against robots and foster the design of adaptive robot behavior capable of supporting her/his engagement in such games.
Citations: 8
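One ingredient of the player model above, the body contraction index, can be approximated as how tightly the tracked skeleton joints cluster around their centroid (a smaller spread means a more contracted posture). The joint coordinates and the mean-distance formulation below are hypothetical, not the authors' exact definition.

```python
# Hypothetical sketch of a body contraction index from Kinect-style joint
# positions: mean Euclidean distance of joints from their centroid.
import math

def contraction_index(joints: list) -> float:
    """Mean distance of 3-D joint positions from their centroid (smaller = more contracted)."""
    n = len(joints)
    cx = sum(p[0] for p in joints) / n
    cy = sum(p[1] for p in joints) / n
    cz = sum(p[2] for p in joints) / n
    return sum(math.dist(p, (cx, cy, cz)) for p in joints) / n

# Invented example poses: arms spread wide vs. pulled close to the torso.
open_pose = [(-0.6, 1.4, 0.0), (0.6, 1.4, 0.0), (0.0, 1.7, 0.0), (0.0, 0.9, 0.0)]
closed_pose = [(-0.1, 1.2, 0.0), (0.1, 1.2, 0.0), (0.0, 1.4, 0.0), (0.0, 1.0, 0.0)]
assert contraction_index(closed_pose) < contraction_index(open_pose)
```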
Endocrinological Responses to a New Interactive HMI for a Straddle-type Vehicle: A Pilot Study
Proceedings of the 5th International Conference on Human Agent Interaction | Pub Date: 2017-10-17 | DOI: 10.1145/3125739.3132588
Takashi Suegami, H. Sumioka, Fuminao Obayashi, Kyonosuke Ichii, Yoshinori Harada, Hiroshi Daimoto, A. Nakae, H. Ishiguro
This paper hypothesized that a straddle-type vehicle (e.g., a motorcycle) would be a suitable platform for haptic human-machine interactions that elicit affective responses or positive modulation of human emotion. Based on this idea, a new human-machine interface (HMI) for a straddle-type vehicle was proposed for haptically interacting with a rider, together with other visual (design), tactile (texture and heat), and auditory (sound) features. We investigated endocrine changes after riding a simulator equipped with either the new interactive HMI or a typical HMI. Compared with the typical HMI, salivary cortisol decreased significantly after riding with the interactive HMI. Salivary testosterone also tended to be reduced after riding with the interactive HMI, along with a significant reduction in salivary DHEA. The results demonstrate that, as hypothesized, haptic interaction from a vehicle can influence a rider endocrinologically and may mitigate the rider's stress and aggression.
Citations: 0
Exploring Mediation Effect of Mental Alertness for Expressive Lights: Preliminary Results of LED Light Animations on Intention to Buy Hedonic Products and Choose between Healthy and Unhealthy Food
Proceedings of the 5th International Conference on Human Agent Interaction | Pub Date: 2017-10-17 | DOI: 10.1145/3125739.3132598
Sichao Song, S. Yamada
Expressive light has been explored in a handful of previous studies as a means for robots, especially appearance-constrained robots that are not able to employ human-like expressions, to convey internal states and interact with people. However, it is still unknown how different light expressions can affect a person's perception and behavior. In this poster, we explore this research question by studying the effects of different expressive light animations on people's intention to buy hedonic products and how they choose between healthy and unhealthy food. Our preliminary results show that participants assigned to a positive and low arousal light animation condition had a higher intention of purchasing hedonic products and were inclined to choose unhealthy over healthy food. Such findings are in line with previous literature in marketing research, suggesting that mental alertness mediates the effect of external stimuli on a person's behavioral intentions. Future work is thus required to evaluate such findings in a human-robot interaction context.
Citations: 1
Towards the Analysis of Movement Variability in Human-Humanoid Imitation Activities
Proceedings of the 5th International Conference on Human Agent Interaction | Pub Date: 2017-10-17 | DOI: 10.1145/3125739.3132595
Miguel P. Xochicale, Chris Baber
In this paper, we present preliminary results for the analysis of movement variability in human-humanoid imitation activities. We applied the state space reconstruction theorem, which gives a better understanding of movement variability than techniques in the time or frequency domains. In our experiments, we tested the hypothesis that participants, even when performing the same arm movement, show slight differences in the way they move. With this in mind, we asked eighteen participants to copy NAO's arm movements while we collected data from inertial sensors attached to the participants' wrists and estimated head pose using the OpenFace framework. With the proposed metric, we found that sixteen of the eighteen participants imitated the robot well by moving their arms symmetrically and keeping their heads static; two participants, however, moved their heads in a synchronous way even when the robot's head was completely static, and two other participants moved their arms asymmetrically to the robot. Although the work is at an early stage, we believe these preliminary results are promising for applications in rehabilitation, sport science, entertainment, and education.
Citations: 1
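State space reconstruction of the kind the paper applies is commonly done via time-delay embedding (Takens' theorem): a scalar sensor stream x(t) becomes m-dimensional vectors [x(t), x(t+τ), ..., x(t+(m-1)τ)]. The embedding parameters and the short trace below are illustrative, not the study's actual settings.

```python
# Sketch of time-delay embedding for a 1-D inertial-sensor signal.

def delay_embed(signal: list, m: int, tau: int) -> list:
    """Reconstruct an m-dimensional state space from a scalar time series
    using delay tau (in samples)."""
    n_vectors = len(signal) - (m - 1) * tau
    return [tuple(signal[i + j * tau] for j in range(m)) for i in range(n_vectors)]

# e.g. a toy accelerometer trace embedded with m=3, tau=2
trace = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0]
states = delay_embed(trace, m=3, tau=2)
assert states[0] == (0.0, 1.0, 0.0)
assert len(states) == len(trace) - 4
```

Variability between participants can then be compared in the reconstructed state space rather than in the raw time or frequency domain.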
A Graphical Digital Personal Assistant that Grounds and Learns Autonomously
Proceedings of the 5th International Conference on Human Agent Interaction | Pub Date: 2017-10-17 | DOI: 10.1145/3125739.3132592
C. Kennington, Aprajita Shukla
We present a speech-driven digital personal assistant that is robust despite little or no training data and autonomously improves as it interacts with users. The system is able to establish and build common ground between itself and users by signaling understanding and by learning a mapping via interaction between the words that users actually speak and the system actions. We evaluated our system with real users and found an overall positive response. We further show through objective measures that autonomous learning improves performance in a simple itinerary filling task.
Citations: 3
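The word-to-action mapping learned through interaction can be caricatured as co-occurrence counting between user words and confirmed system actions; the class, action names, and utterances below are invented for illustration and are not the system's actual interface.

```python
# Hypothetical sketch: count which words co-occur with which confirmed actions,
# then score new utterances against those counts to pick an action.
from collections import defaultdict

class Grounder:
    def __init__(self):
        # word -> action -> co-occurrence count
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, utterance: str, action: str) -> None:
        """Update counts after the user confirms that `action` was correct."""
        for word in utterance.lower().split():
            self.counts[word][action] += 1

    def best_action(self, utterance: str, actions: list) -> str:
        """Pick the action with the highest summed word-action evidence."""
        def score(a):
            return sum(self.counts[w][a] for w in utterance.lower().split())
        return max(actions, key=score)

g = Grounder()
g.observe("add a flight to boise", "add_flight")
g.observe("book the flight", "add_flight")
g.observe("add a hotel", "add_hotel")
assert g.best_action("book a flight", ["add_flight", "add_hotel"]) == "add_flight"
```

Each confirmed interaction strengthens the grounding, which is one way a system can improve autonomously with little or no initial training data.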
Active Perception based on Energy Minimization in Multimodal Human-robot Interaction
Proceedings of the 5th International Conference on Human Agent Interaction | Pub Date: 2017-10-17 | DOI: 10.1145/3125739.3125757
Takato Horii, Y. Nagai, M. Asada
Humans use various modalities to express their internal states. If a robot interacting with humans can attend to only a limited number of signals, it should select the more informative ones for estimating its partner's state. We propose an active perception method that controls the robot's attention based on an energy minimization criterion. An energy-based model, trained to estimate the latent state from sensory signals, computes energy values corresponding to the occurrence probabilities of those signals: the lower the energy, the more likely the signals. Our method therefore selects, among the available modalities, the one with the lowest expected energy, exploiting the most frequent experiences. We employed a multimodal deep belief network to represent the relationships between humans' states and their expressions. Our method outperformed other modality-selection methods in an emotion estimation task. We discuss the potential of our method to advance human-robot interaction.
Citations: 6
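The selection rule described above reduces to choosing the modality whose expected energy under the learned model is lowest. The energy values below are made up for illustration; in the paper they would come from the trained multimodal deep belief network.

```python
# Sketch of energy-based modality selection: lower expected energy means the
# model finds that modality's signals more probable, so attend to it.

def select_modality(expected_energy: dict) -> str:
    """Return the modality with the lowest expected energy."""
    return min(expected_energy, key=expected_energy.get)

# Invented expected-energy values for three candidate modalities.
expected_energy = {"face": 2.1, "voice": 1.3, "posture": 3.7}
assert select_modality(expected_energy) == "voice"
```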