{"title":"Teleoperated Robot Coaching for Mindfulness Training: A Longitudinal Study","authors":"I. Bodala, Nikhil Churamani, H. Gunes","doi":"10.1109/RO-MAN50785.2021.9515371","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515371","url":null,"abstract":"Social robots are becoming incorporated in daily human lives, assisting in the promotion of the physical and mental wellbeing of individuals. To investigate the design and use of social robots for delivering mindfulness training, we develop a teleoperation framework that enables an experienced Human Coach (HC) to conduct mindfulness training sessions virtually, by replicating their upper-body and head movements onto the Pepper robot, in real-time. Pepper’s vision is mapped onto a Head-Mounted Display (HMD) worn by the HC and a bidirectional audio pipeline is set up, enabling the HC to communicate with the participants through the robot. To evaluate the participants’ perceptions of the teleoperated Robot Coach (RC), we study the interactions between a group of participants and the RC over 5 weeks and compare these with another group of participants interacting directly with the HC. Growth modelling analysis of this longitudinal data shows that the HC ratings are consistently greater than 4 (on a scale of 1 5) for all aspects while an increase is witnessed in the RC ratings over the weeks, for the Robot Motion and Conversation dimensions. Mindfulness training delivered by both types of coaching evokes positive responses from the participants across all the sessions, with the HC rated significantly higher than the RC on Animacy, Likeability and Perceived Intelligence. Participants’ personality traits such as Conscientiousness and Neuroticism are found to influence their perception of the RC. 
These findings enable an understanding of the differences between the perceptions of HC and RC delivering mindfulness training, and provide insights towards the development of robot coaches for improving the psychological wellbeing of individuals.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"7 1","pages":"939-944"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87413293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Maximizing Legibility in Stochastic Environments","authors":"Shuwa Miura, A. Cohen, S. Zilberstein","doi":"10.1109/RO-MAN50785.2021.9515318","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515318","url":null,"abstract":"Making an agent’s intentions clear from its observed behavior is crucial for seamless human-agent interaction and for increased transparency and trust in AI systems. Existing methods that address this challenge and maximize legibility of behaviors are limited to deterministic domains. We develop a technique for maximizing legibility in stochastic environments and illustrate that using legibility as an objective improves interpretability of agent behavior in several scenarios. We provide initial empirical evidence that human subjects can better interpret legible behavior.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"50 1","pages":"1053-1059"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86104045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proposing Remote Video Conversation System \"PARAPPA\": Delivering the Gesture and Body Posture with Rotary Screen*","authors":"Koki Ijuin, Kunihiro Ogata, Kentaro Watanabe, H. Miwa, Yoshinobu Yamamoto","doi":"10.1109/RO-MAN50785.2021.9515454","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515454","url":null,"abstract":"Globalization and the effect of recent infectious disease are changing the remote conversations as a new normal in business meetings, social provision and casual chatting. Previous research show that the remote conversations have a difficulty on showing the presence or attention to the other interlocutors, especially in the situation where the majority of interlocutor share the same place and one or a few interlocutor participate from difference place. This paper proposed \"Parappa\", a remote video conversation system, for those unbalanced condition by utilizing both physical and virtual approaches to share the nonverbal behaviors of remotely-participating interlocutor. The proposed system is constructed with a rotatable screen which projects the life-size avatar of remote interlocutor. The rotation of the screen represents the body posture and the projected avatar show the gesture. 
The results of preliminary analyses of eye-gaze activity and the frequency of screen rotation during conversation suggest that the proposed system can convey the presence of remote interlocutors.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"85 1","pages":"1060-1065"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85501477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An empirical study of how much a social robot increases the rate of valid responses in a questionnaire survey","authors":"Taiga Natori, T. Iio","doi":"10.1109/RO-MAN50785.2021.9515364","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515364","url":null,"abstract":"In this paper, we report that how much the presence of a robot increases the rate of valid responses in a questionnaire survey. We conducted a field trial in a university campus based on a single-case design method that is used in applied behavior analysis. We made two conditions: One was a condition that a robot interacting with people was installed (robot condition), and another was a condition that the robot was not installed (no-robot condition). We measured once a day for eight days, each day lasting 125 minutes from 11:25 to 13:30. The robot condition and the no-robot condition were assigned alternately to each day. The robot was controlled by an operator. The results showed that the valid response rates of the robot condition were 8.9%, 3.8%, 6.1%, and 1.9%, and those of the no-robot condition were 4.9%, 3.7%, 3.1%, and 1.8%. Considering both these results and complete answer rates, we found that although a robot can attract people’s attention and increase the response rates a little and short-term, the valid response rates do not increase so much as we expect because people who are attracted by the robot are likely to quit answering the questionnaire halfway. 
In order to increase the valid response rates, we will need to consider a new interaction design that prevents people from abandoning the questionnaire partway through.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"53 1","pages":"951-956"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91328624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extrapolating significance of text-based autonomous vehicle scenarios to multimedia scenarios and implications for user-centered design","authors":"K. Robinson, L. Robert, R. Eglash","doi":"10.1109/RO-MAN50785.2021.9515372","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515372","url":null,"abstract":"Extrapolation from low-fidelity design iterations is especially critical in HRI. An initial proposal for low-fidelity to higher fidelity extrapolation is developed using insights from cognitive multimedia learning theory to account for the effects of prototype medium and three types of cognitive demands. Inspired by Donald Norman and others, our proposal leverages tightly controlled and multi-authored scenarios through crowdsourcing to create additional potential evidence as a kind of experimental “stress test.” We motivate our proposal by investigating the intersection of emotion and human control, which is understudied outside of autonomous vehicles (AV) and HRI research. Evidence for positively moderated emotional effects in text-based AV scenarios as well as tentative evidence for our extrapolation proposal are identified.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"38 1","pages":"1159-1164"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90445354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analytical Solution of Pepper’s Inverse Kinematics for a Pose Matching Imitation System","authors":"Darja Stoeva, H. Frijns, M. Gelautz, Oliver Schürer","doi":"10.1109/RO-MAN50785.2021.9515480","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515480","url":null,"abstract":"In this paper, a human-humanoid imitation system is proposed, with a focus on the kinematic model used for translating end effector positions to joint angles. The overall system comprises the humanoid robot Pepper and a Kinect v2 camera for capturing human 3D joint positions. The presented kinematic model is based on analytical solutions of Pepper’s inverse kinematics and also uses the forward kinematics. The aim of the paper is to provide insights into deriving the kinematics of robotic chains for the purpose of pose matching imitation, as well as accuracy evaluation of the derived forward and inverse kinematic solutions. The solutions of the inverse kinematics provide results with a mean error of approximately 0.2° for the angle solutions of the head joints, 0.7° for the arm joints, and 4° for the torso (leg) joints. The evaluated speed lies within a range of 0.002 to 0.08 ms. These results indicate that the presented kinematic model is an effective method for translating end effector positions to joint angles for our pose imitation application in real-time or close to it. 
Finally, we show preliminary results of the proposed imitation system and discuss future work.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"367 1","pages":"167-174"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77754315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Preferred Oil and Ceramics Options for EHA Drive Systems and Computed Torque Control of an EHA-Driven Robot Manipulator","authors":"Mitsuo Komagata, Yutaro Imashiro, Ko Yamamoto, Yoshihiko Nakamura","doi":"10.1109/RO-MAN50785.2021.9515398","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515398","url":null,"abstract":"6-DOF robot manipulator Hydracer was developed to gain high output torque and high backdrivability by adopting electro-hydrostatic actuators, however control of overall system of Hydracer is not yet conducted. To achieve flexible force control of Hydracer, we worked on the system improvements: enhancement of reliability of ceramics components, reduction of internal leakage by considering the property of hydraulic oil, and the identification of inertial parameters to improve its controllability. By using identified parameters, flexible force control of Hydracer by zero-torque control with gravity compensation was realized which reveals the potential of safe human-robot interaction.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"21 1","pages":"540-545"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85062713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Three-Legged Reconfigurable Spherical Robot No.3","authors":"Supaphon Kamon, Natthaphon Bunathuek, Pudit Laksanacharoen","doi":"10.1109/RO-MAN50785.2021.9515319","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515319","url":null,"abstract":"A three-legged reconfigurable spherical robot No.3 is presented in this paper. The robot has three legs kept inside the two half hemispherical shells. The three legs can be extended by splitting the top half of the hemispherical shells using a linear actuator installed at the core of the spherical shape. Each leg is consisted of three of revolute joints. When three legs are on ground the robot can perform three legs crawl-kick walking gait (one leg in front and two legs in rear) and butterfly walking gait (two legs in front and one leg in rear). The robot is able to move on flat ground and get over small barrier with butterfly walking gait with the speed of 5.33 cm/s and 1.57 cm/s respectively. For crawl-kick walking gait, the robot can move a little faster with the speed of 6.15 cm/s on flat ground and 1.51 cm/s for crossing barrier.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"162 1","pages":"426-433"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78553894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deployment of a Socially Assistive Robot for Assessment of COVID-19 Symptoms and Exposure at an Elder Care Setting","authors":"Caio Mucchiani, P. Cacchione, M. Johnson, Ross Mead, Mark H. Yim","doi":"10.1109/RO-MAN50785.2021.9515551","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515551","url":null,"abstract":"This work investigates the deployment of an affordable socially assistive robot (SAR) at an older adult day care setting for the screening of COVID-19 symptoms and exposure. Despite the focus on older adults, other stakeholders (clinicians and caregivers) were included in the study due to the need for daily COVID-19 screening. The investigation considered which aspects of human-robot-interaction (HRI) are relevant when designing social agents for patient screening. The implementation was based upon the current screening procedure adopted by the deployment facility, and translated into robot dialogues and gesturing motion. Post-interaction surveys with participants informed their preferences for the type of interaction and system usability. Observer surveys evaluated users’ reaction, verbal and physical engagement. Results indicated general acceptance of the social agent and possible improvements to the current version of the robot to encourage a broader adoption by the stakeholders.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"79 1","pages":"1189-1195"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76675349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring Possibilities of Social Robot’s Interactive Services in the Case of a Hotel Room","authors":"Junya Nakanishi, Tomohisa Hazama, Jun Baba, Sichao Song, Y. Yoshikawa, H. Ishiguro","doi":"10.1109/RO-MAN50785.2021.9515380","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515380","url":null,"abstract":"To explore the interaction design of an autonomous social robot stationed in a hotel room, we conducted a Wizard of Oz study. We developed a teleoperated robotic system that appears to move autonomously through voice-to-synthesis processing. Comparing the evaluation of the latest autonomous case with one of these teleoperated cases, the results show that it is possible to construct a robotic system that is more highly rated in terms of warmth, competence, and enjoyment of conversation. The results also suggest novel forms of the hotel room robot’s interactive services, such as a hotel-life management service and conversation partner service as a role of a listening presence, which draws out and understands with the guests’ talk.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"29 1","pages":"925-930"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76762800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}