Model-based Reminiscence: Guiding Mental Time Travel by Cognitive Modeling
J. Morita, Takatsugu Hirayama, K. Mase, Kazunori Yamada
Proceedings of the Fourth International Conference on Human Agent Interaction, 2016. DOI: https://doi.org/10.1145/2974804.2980492
Abstract: This paper proposes an approach to elderly mental care called model-based reminiscence, which uses cognitive modeling to guide a user's mental time travel. In this approach, a personalized cognitive model is constructed by implementing a user's lifelog (a photo library) in the ACT-R cognitive architecture. The constructed model retrieves photos based on human memory characteristics such as learning, forgetting, inhibition, and noise, which are regulated by parameter values corresponding to cognitive and emotional health. The authors assumed that a user's mental health could be assessed from their reactions to photo sequences retrieved by models with various parameter settings, and that a user could be motivated by guiding their memory recall with photo sequences generated from a model in a healthy, optimal state. A simulation study indicates the potential of this approach, presenting a variety of model behaviors corresponding to cognitive and emotional states.

Communication Cues in a Human-Robot Touch Interaction
Takahiro Hirano, M. Shiomi, T. Iio, Mitsuhiko Kimoto, Takuya Nagashio, I. Tanev, K. Shimohara, N. Hagita
Proceedings of the Fourth International Conference on Human Agent Interaction, 2016. DOI: https://doi.org/10.1145/2974804.2974809
Abstract: Haptic interaction is a key capability for social robots that closely interact with people in daily environments, and human communication cues such as gaze behaviors make haptic interaction look natural. Because the purpose of this study is to improve human-robot touch interaction, we conducted an experiment in which 20 participants interacted with a robot under different combinations of gaze behaviors and touch styles. The experimental results showed that both gaze behaviors and touch styles influence the perceived feelings of touch interaction with a robot.
{"title":"\"I know how you performed!\": Fostering Engagement in a Gaming Situation Using Memory of Past Interaction","authors":"A. Kipp, F. Kummert","doi":"10.1145/2974804.2974818","DOIUrl":"https://doi.org/10.1145/2974804.2974818","url":null,"abstract":"Studying long-term human-robot interactions in the context of playing games can help answer many questions about how humans perceive robots. This paper presents the results of a study where the robot Flobi [11] plays a game of pairs against a human player and employs a memory with information about past interactions. The study focuses on long-term effects, namely the novelty effect, and how a memory with statistics about past game-plays can be used to cope with that effect. We also investigate how an autonomous interaction compares to a remotely controlled system that plays flawlessly. Results showed that providing information about how players performed throughout the interaction can help to keep them more interested and engaged. Nevertheless, results also showed that this information in combination with perfect playing skills tended to promote a more negative perception of the interaction and of the robot.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"2393 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127476722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

The Optimum Rate of Mimicry in Human-Agent Interaction
Yumiko Shinohara, Katsuhiro Kubo, Momoyo Nozawa, Misa Yoshizaki, Tomomi Takahashi, Hirofumi Hayakawa, Atsushi Hirota, Yukiko Nishizaki, N. Oka
Proceedings of the Fourth International Conference on Human Agent Interaction, 2016. DOI: https://doi.org/10.1145/2974804.2980506
Abstract: The importance of building rapport between a human and an agent is increasing with the burgeoning development of robot technology. Several recent studies have focused on the chameleon effect, using psychological concepts to investigate human-agent interaction. However, the validity of the chameleon effect in human-agent interaction is controversial, and few studies have explored the influence of individual cognitive ability and the rate of mimicry on human-agent interaction. We explored the optimal rate of mimicry and the relationship between mimicry rate and individual empathic ability: we controlled the amount of agent mimicry and examined its effect on participants classified as high- and low-perspective takers. We found that, overall, participants preferred agents that mimicked their behavior 83% of the time. Moreover, high-, but not low-, perspective takers tended to be influenced by the mimicry rate.
{"title":"Forming Intimate Human-Robot Relationships Through A Kissing Machine","authors":"E. Y. Zhang, A. Cheok","doi":"10.1145/2974804.2980513","DOIUrl":"https://doi.org/10.1145/2974804.2980513","url":null,"abstract":"Robots are increasingly becoming involved in people's lives as social companions or even romantic partners rather than mere productivity tools. To facilitate intimacy in human-robot relationships, technologies should enable human to have intimate physical interactions with robots, such as kissing. This paper presents a kissing machine that reproduces and transmits the haptic sensations of kissing. It provides a physical interface for human to form emotional and intimate connections with robots or virtual characters through kissing, and also acts as a remote agent for human to transmit kisses remotely through a communication network.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131257939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Does a Conversational Robot Need to Have its own Values?: A Study of Dialogue Strategy to Enhance People's Motivation to Use Autonomous Conversational Robots","authors":"Takahisa Uchida, T. Minato, H. Ishiguro","doi":"10.1145/2974804.2974830","DOIUrl":"https://doi.org/10.1145/2974804.2974830","url":null,"abstract":"This work studies a dialogue strategy aimed at building people's motivation for talking with autonomous conversational robots. Even though spoken dialogue systems continue to develop rapidly, the existing systems are insufficient for continuous use because they fail to motivate users to talk with them. One reason is that users fail to realize that the intentions of the system's utterances are based on its values. Since people recognize the values of others and modify their own values in human-human conversations, we hypothesize that a dialogue strategy that makes users saliently feel the difference of their own values and those of the system will increase motivation for the dialogues. Our experiment, which evaluated human-human dialogues, supported our hypothesis. However, an experiment with human-android dialogues failed to produce identical results, suggesting that people did not attribute values to our android. For a conversational robot, we need additional techniques to convince people to believe a robot speaks based on its own values and opinions.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134426837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tracking Human Gestures under Field-of-View Constraints","authors":"K. Tee, Yuanwei Chua, Zhiyong Huang","doi":"10.1145/2974804.2980477","DOIUrl":"https://doi.org/10.1145/2974804.2980477","url":null,"abstract":"This paper presents a control design for a desktop telepresence robot that guarantees satisfaction of field-of-view (FOV) constraints when dynamically tracking multiple points of interest on a person.The multi-point tracking problem is solved by complementing centroid tracking with local constraint satisfaction that is achieved by local integral barrier functions active only in small regions near the FOV limits. Such a control provides an aggregate view of the points of interest on the person and ensures that none of them goes out of view. A simulation study illustrates the performance of the proposed control in comparison with a conventional control.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133106115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pre-scheduled Turn-Taking between Robots to Make Conversation Coherent","authors":"T. Iio, Y. Yoshikawa, H. Ishiguro","doi":"10.1145/2974804.2974819","DOIUrl":"https://doi.org/10.1145/2974804.2974819","url":null,"abstract":"Since a talking robot cannot escape from errors in recognizing user's speech in daily environment, its verbal responses are sometimes felt as incoherent with the context of conversation. This paper presents a solution to this problem that generates a social context where a user is guided to find coherency of the robot's utterances, even though its response is produced according to incorrect recognition of user's speech. We designed a novel turn-taking pattern in which two robots behave according to a pre-scheduled scenario to generate such a social context. Two experiments proved that participants who talked to two robots using that turn-taking pattern felt robot's responses to be more coherent than those who talked to one robot not using it; therefore, our proposed turn-taking pattern generated a social context for user's flexible interpretation of robot's responses. This result implies a potential of a multiple robots approach for improving the quality of human-robot conversation.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123449421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of Deformed Embodied Agent during Collaborative Interaction Tasks: Investigation on Subjective Feelings and Emotion","authors":"Aya Kitamura, Yugo Hayashi","doi":"10.1145/2974804.2980478","DOIUrl":"https://doi.org/10.1145/2974804.2980478","url":null,"abstract":"Designing embodied agents that are empathic and positive towards humans is important in Human Agent Interaction (HAI) and design factors need to be instigated based on experimental investigation. Agent design specificity, in which less specific animated designs are better than realistic designs, is one of the key factors that facilitate positive emotions during interactions. Focusing on this point, this study investigated the effects of a deformed embodied agent during a collaborative interaction task with the objective of understanding how subjective interpersonal states and emotional states change when deformed embodied agents are used instead of non-deformed agents. This was accomplished by developing an interactive communication task with the embodied agent and collecting subjective and emotional state data during the task. The results obtained indicate that deformed agents evoke impressions of closeness and produce higher arousal states.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123667344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Model-Driven Gaze Simulation for the Blind Person in Face-to-Face Communication
S. Qiu, S. A. Anas, Hirotaka Osawa, G.W.M. Rauterberg, Jun Hu
Proceedings of the Fourth International Conference on Human Agent Interaction, 2016. DOI: https://doi.org/10.1145/2974804.2980482
Abstract: In face-to-face communication, eye gaze is integral to conversation and supplements verbal language. Sighted people often use eye gaze to convey nonverbal information in social interactions, but a blind conversation partner cannot access or react to these cues. In this paper, we present E-Gaze glasses (E-Gaze), an assistive device based on an eye-tracking system. It simulates gaze on behalf of the blind person so that they can react to and engage the sighted partner in face-to-face conversations. It is designed around a model that combines an eye-contact mechanism with a turn-taking strategy. We further propose an experimental design to test E-Gaze and hypothesize that model-driven gaze simulation can enhance the conversation quality between the sighted and the blind person in face-to-face communication.