{"title":"A Conversational Robotic Approach to Dementia Symptoms: Measuring Its Effect on Older Adults","authors":"R. Yamazaki, H. Kase, S. Nishio, H. Ishiguro","doi":"10.1145/3349537.3351888","DOIUrl":"https://doi.org/10.1145/3349537.3351888","url":null,"abstract":"The purpose of this study was to investigate the effect of robotic mediation in promoting conversation among older adults with dementia to improve behavioral and psychological symptoms of dementia (BPSD). In this longitudinal study, we explored how BPSD in older adults could be affected by the use of a teleoperated android robot named Telenoid that could potentially promote conversation. The study was conducted at a long-term residential care facility as a prospective, 10-week, single-arm, exploratory user study. By lowering the barrier of the digital divide, the android robot gave those with dementia a way to remotely communicate with others. BPSD were assessed using the Neuropsychiatric Inventory Nursing Home Version (NPI-NH), and we investigated changes in the participants over time. 
According to NPI-NH scores, Telenoid could improve BPSD in older adults in the \"anxiety\" and \"appetite and eating change\" domains. The results indicate that, as a non-pharmacological approach, daily use of the conversation-promoting android robot could have a therapeutic effect on BPSD in older adults.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114325664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Surface Recognition using a Composite Augmented Reality Marker","authors":"Daichi Shirata, K. Izumi, T. Tsujimura","doi":"10.1145/3349537.3352788","DOIUrl":"https://doi.org/10.1145/3349537.3352788","url":null,"abstract":"This research proposes a new augmented reality system called a \"composite marker\" to recognize curved objects obstructing a mobile robot's workspace. Fundamental experiments identifying a sphere are carried out to verify its effectiveness.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121056368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Preliminary Investigation of Pre-Touch Reaction Distances toward Virtual Agents","authors":"Aoba Sato, Mitsuhiko Kimoto, T. Iio, K. Shimohara, M. Shiomi","doi":"10.1145/3349537.3352796","DOIUrl":"https://doi.org/10.1145/3349537.3352796","url":null,"abstract":"This study addresses pre-touch reaction distance effects in human-agent touch interaction in a VR environment. Past studies on human-agent interaction focused on post-touch situations; pre-touch situations received less attention and were investigated only with a physical agent, i.e., a robot. In this study, we conducted a preliminary data collection to investigate the minimum comfortable distance to a virtual agent's touch using a VR application. For this purpose, we prepared two kinds of virtual agents (female and male) to investigate gender effects in touch settings. We analyzed the collected data to investigate people's perceptions of touch from virtual agents, and the results showed a phenomenon similar to that observed with a physical agent.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131654355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of Tsunami Evacuation Simulation System for Disaster Prevention Plan in Urban Space","authors":"Yasuo Kawai, Yurie Kaizu","doi":"10.1145/3349537.3352790","DOIUrl":"https://doi.org/10.1145/3349537.3352790","url":null,"abstract":"At present, hazard maps are developed using computationally expensive techniques, and after a major disaster these maps must be revised. We developed a low-cost tsunami evacuation simulation system using a game engine and open data. To clarify current issues with the location of evacuation sites, we developed an agent that performs evacuation actions and autonomously searches for evacuation destinations at specified speeds. The system provided three types of walking speeds and disaster conditions and two types of evacuation behaviors, and the number and ratio of agents could be freely changed. The simulation made it clear that there are places where victims are concentrated even in inland areas.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"249 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129111004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accounting Social Cognitive Mechanisms by the Framework of Predictive Coding and Active Inference: A Synthetic Experimental Study using Robotics Interaction Platforms","authors":"J. Tani","doi":"10.1145/3349537.3357798","DOIUrl":"https://doi.org/10.1145/3349537.3357798","url":null,"abstract":"Our group has explored possible neuropsychological mechanisms for social cognition using the frameworks of predictive coding and active inference [1]. To gain a better understanding, we have taken the so-called synthetic robotics approach, wherein a set of experiments has been conducted on robot-human as well as robot-robot interactions. In particular, we examine the mechanisms underlying spontaneous coupling and decoupling among agents, as well as autonomous shifts from one social context to another. We also investigate how novel or creative behaviors can be co-developed by robots and human tutors through developmental interactive tutoring processes. Finally, I address phenomenological aspects of social cognition based on our preliminary examinations of how humans can feel the intention or free will of robots, and how robots could possibly do so for humans, in human-in-the-robot-loop experiments.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132145535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"App-LSTM: Data-driven Generation of Socially Acceptable Trajectories for Approaching Small Groups of Agents","authors":"Fangkai Yang, Christopher E. Peters","doi":"10.1145/3349537.3351885","DOIUrl":"https://doi.org/10.1145/3349537.3351885","url":null,"abstract":"While many works involving human-agent interactions have focused on individuals or crowds, modelling interactions on the group scale has not been considered in depth. Simulation of interactions with groups of agents is vital in many applications, enabling more comprehensive and realistic behavior encompassing all possibilities between crowd and individual levels. In this paper, we propose a novel neural network App-LSTM to generate the approach trajectory of an agent towards a small free-standing conversational group of agents. The App-LSTM model is trained on a dataset of approach behaviors towards the group. Since current publicly available datasets for these encounters are limited, we develop a social-aware navigation method as a basis for creating a semi-synthetic dataset composed of a mixture of real and simulated data representing safe and socially-acceptable approach trajectories. Via a group interaction module, App-LSTM then captures the position and orientation features of the group and refines the current state of the approaching agent iteratively to better focus on the current intention of group members. 
We show that our App-LSTM outperforms baseline methods in generating group-approach trajectories.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130208074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Visual Sensing Platform for Robot Teachers","authors":"Yuyuan Shi, Yin Chen, Liz Katherine Rincon Ardila, G. Venture, M. Bourguet","doi":"10.1145/3349537.3352764","DOIUrl":"https://doi.org/10.1145/3349537.3352764","url":null,"abstract":"This paper describes our ongoing work to develop a visual sensing platform that can inform a robot teacher about the behaviour and affective state of its student audience. We have developed a multi-student behaviour recognition system, which can detect behaviours such as \"listening\" to the lecturer, \"raising hand\", or \"sleeping\". We have also developed a multi-student affect recognition system which, starting from eight basic emotions detected from facial expressions, can infer higher emotional states relevant to a learning context, such as \"interested\", \"distracted\" and \"confused\". Both systems are being tested with the SoftBank robot Pepper, which can respond to various student behaviours and emotional states with adapted movements, postures and speech.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130217774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Utterances in Social Robot Interactions -- Correlation Analyses between Robot's Fluency and Participant's Impression","authors":"Koki Ijuin, Kristiina Jokinen","doi":"10.1145/3349537.3352792","DOIUrl":"https://doi.org/10.1145/3349537.3352792","url":null,"abstract":"This study examines the use of eye-gaze patterns in evaluating the partner's understanding process. The goal of the research is to better understand how humans focus their attention when interacting with a robot and to build a model of natural gaze patterns to improve the robot's engagement and interaction capabilities.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125664649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dialogue Systems for the Assessment of Language Learners' Productive Vocabulary","authors":"D. Tellols, Hitoshi Nishikawa, T. Tokunaga","doi":"10.1145/3349537.3352772","DOIUrl":"https://doi.org/10.1145/3349537.3352772","url":null,"abstract":"This paper proposes to use dialogue systems to assess language learners' productive vocabulary. We introduce a new task where dialogue systems try to induce learners to use specific words during a natural conversation to assess their productive vocabulary. To investigate the feasibility of the dialogue systems that are capable of this task, we performed two kinds of experiments.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123407638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Communicating Emotional State and Personality with Eye-color and Light Intensity","authors":"Betty Tärning, Trond A. Tjøstheim, B. Johansson","doi":"10.1145/3349537.3352769","DOIUrl":"https://doi.org/10.1145/3349537.3352769","url":null,"abstract":"We conducted two experiments in which subjects rated images of a robot head with different eye colors and light intensities on how well they communicate emotions such as anger, enjoyment, and surprise, as well as personality traits such as friendliness, intelligence, and level of trust. Results indicate, for example, that green and turquoise eye colors were more associated with agreeable personality traits. We also found that, for sadness and disgust, a dimmed light intensity appears to communicate a more intense feeling. Finally, red communicates negative emotions most saliently.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128999663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}