{"title":"Master-Slave Guidewire and Catheter Robotic System for Cardiovascular Intervention","authors":"Yujia Xiang, Hao Shen, Le Xie, Hesheng Wang","doi":"10.1109/RO-MAN46459.2019.8956423","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956423","url":null,"abstract":"Cardiovascular disease remains a primary cause of morbidity globally. Percutaneous coronary intervention plays a crucial role in the treatment. The radiation exposure of surgeons during the cardiovascular intervention can be avoided by master-slave surgical robots. This paper introduces a master- slave guidewire and catheter robotic system to protect the surgeons from X ray radiation to the most extent. And the jitters of master manipulators are mitigated by Kalman filtering algorithm. With two master manipulators, it helps to retain the surgeon’s traditional operating habits. Also, a vascular model trial was conducted to validate that this interventional robotic system could complete the alternate progress and rotation of interventional guidewire and catheter.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133908461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigation of the driver’s seat that displays future vehicle motion","authors":"Y. Ishii, Tetsushi Ikeda, Toru Kobayashi, Y. Kato, A. Utsumi, Isamu Nagasawa, S. Iwaki","doi":"10.1109/RO-MAN46459.2019.8956338","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956338","url":null,"abstract":"Automated driving reduces the burden on the driver, however also makes it difficult for the driver to understand the current situation and predict the future movement of the vehicle. When the acceleration due to automated driving occurs without future prediction, the driver’s anxiety and discomfort are increased compared to the case in manual driving. To facilitate the prediction of the future behavior of the vehicle by the driver, this paper aims to design and evaluate a haptic interface that actuates the vehicle seat. Our system displays to the driver the movement of the vehicle a few seconds in the future, which allows the driver to make predictions and preparations. Using a driving simulator, we compared the conditions where the movement of the car was displayed in advance for the length of different time. The subjective evaluation of the driver showed that the predictability of the behavior of the vehicle were significantly increased compared to the case without display. The experiment also showed that comfortable feeling significantly decreased if the preceding display is too early.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115537049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Image-based Path Planning Algorithm for Eye-in-Hand Visual Servoing of a Redundant Manipulator in a Human Centered Environment","authors":"Deepak Raina, P. Mithun, S. Shah, S. Kumar","doi":"10.1109/RO-MAN46459.2019.8956330","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956330","url":null,"abstract":"This paper presents a novel image-based path-planning and execution framework for vision-based control of a robot in a human centered environment. The proposed method involves applying Rapidly-exploring Random Tree (RRT) exploration to perform Image-Based Visual Servoing (IBVS) while satisfying multiple task constraints by exploiting robot redundancy. The methodology incorporates data-set of robot’s workspace images for path-planning and design a controller based on visual servoing framework. This method is generic enough to include constraints like Field-of-View (FoV) limits, joint limits, obstacles, various singularities, occlusions etc. in the planning stage itself using task function approach and thereby avoiding them during the execution. The use of path-planning eliminates many of the inherent limitations of IBVS with eye-in-hand configuration and makes the use of visual servoing practical for dynamic and complex environments. Several experiments have been performed on a UR5 robotic manipulator to demonstrate that it is an effective and robust way to guide a robot in such environments.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"182 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115580920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Communicating with SanTO – the first Catholic robot","authors":"G. Trovato, Franco Pariasca, R. Ramirez, Javier Cerna, V. Reutskiy, Laureano Rodriguez, F. Cuéllar","doi":"10.1109/RO-MAN46459.2019.8956250","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956250","url":null,"abstract":"In the 1560s Philip II of Spain commissioned the realisation of a “mechanical monk”, a small humanoid automaton with the ability to move and walk. Centuries later, we present a Catholic humanoid robot. With the appearance of a statue of a saint and some interactive features, it is designed for Christian Catholic users for a variety of purposes. Its creation offers new insights on the concept of sacredness applied to a robot and the role of automation in religion. In this paper we present its concept, its functioning, and a preliminary test. A dialogue system, integrated within the multimodal communication consisting of vision, touch, voice and lights, drives the interaction with the users. We collected the first responses, particularly focused on the impression of sacredness of the robot, during an experiment that took place in a church in Peru.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114186493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TeachMe: Three-phase learning framework for robotic motion imitation based on interactive teaching and reinforcement learning","authors":"Taewoo Kim, Joo-Haeng Lee","doi":"10.1109/RO-MAN46459.2019.8956326","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956326","url":null,"abstract":"Motion imitation is a fundamental communication skill for a robot; especially, as a nonverbal interaction with a human. Owing to kinematic configuration differences between the human and the robot, it is challenging to determine the appropriate mapping between the two pose domains. Moreover, technical limitations while extracting 3D motion details, such as wrist joint movements from human motion videos, results in significant challenges in motion retargeting. Explicit mapping over different motion domains indicates a considerably inefficient solution. To solve these problems, we propose a three-phase reinforcement learning scheme to enable a NAO robot to learn motions from human pose skeletons extracted from video inputs. Our learning scheme consists of three phases: (i) phase one for learning preparation, (ii) phase two for a simulation-based reinforcement learning, and (iii) phase three for a human-in-the-loop-based reinforcement learning. In phase one, embeddings of the motions of a human skeleton and robot are learned by an autoencoder. In phase two, the NAO robot learns a rough imitation skill using reinforcement learning that translates the learned embeddings. In the last phase, the robot learns motion details that were not considered in the previous phases by interactively setting rewards based on direct teaching instead of the method used in the previous phase. Especially, it is to be noted that a relatively smaller number of interactive inputs are required for motion details in phase three when compared to the large volume of training sets required for overall imitation in phase two. The experimental results demonstrate that the proposed method improves the imitation skills efficiently for hand waving and saluting motions obtained from NTU-DB.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127552824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Audio-Visual SLAM towards Human Tracking and Human-Robot Interaction in Indoor Environments","authors":"Aaron D. Chau, Kouhei Sekiguchi, Aditya Arie Nugraha, Kazuyoshi Yoshii, Kotaro Funakoshi","doi":"10.1109/RO-MAN46459.2019.8956321","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956321","url":null,"abstract":"We propose a novel audio-visual simultaneous and localization (SLAM) framework that exploits human pose and acoustic speech of human sound sources to allow a robot equipped with a microphone array and a monocular camera to track, map, and interact with human partners in an indoor environment. Since human interaction is characterized by features perceived in not only the visual modality, but the acoustic modality as well, SLAM systems must utilize information from both modalities. Using a state-of-the-art beamforming technique, we obtain sound components correspondent to speech and noise; and estimate the Direction-of-Arrival (DoA) estimates of active sound sources as useful representations of observed features in the acoustic modality. Through estimated human pose by a monocular camera, we obtain the relative positions of humans as representation of observed features in the visual modality. Using these techniques, we attempt to eliminate restrictions imposed by intermittent speech, noisy periods, reverberant periods, triangulation of sound-source range, and limited visual field-of-views; and subsequently perform early fusion on these representations. We develop a system that allows for complimentary action between audio-visual sensor modalities in the simultaneous mapping of multiple human sound sources and the localization of observer position.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"7 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126082478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Designing an Experimental and a Reference Robot to Test and Evaluate the Impact of Cultural Competence in Socially Assistive Robotics","authors":"C. Recchiuto, C. Papadopoulos, Tetiana Hill, Nina Castro, Barbara Bruno, I. Papadopoulos, A. Sgorbissa","doi":"10.1109/RO-MAN46459.2019.8956440","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956440","url":null,"abstract":"The article focusses on the work performed in preparation for an experimental trial aimed at evaluating the impact of a culturally competent robot for care home assistance. Indeed, it has been estabilished that the user’s cultural identity plays an important role during the interaction with a robotic system and cultural competence may be one of the key elements for increasing capabilities of socially assistive robots. Specifically, the paper describes part of the work carried out for the definition and implementation of two different robotic systems for the care of older adults: a culturally competent robot, that shows its awareness of the user’s cultural identity, and a reference robot, non culturally competent, but with the same functionalities of the former. The design of both robots is here described in detail, together with the key elements that make a socially assistive robot culturally competent, which should be absent in the non-culturally competent counterpart. Examples of the experimental phase of the CARESSES project, with a fictional user are reported, giving a hint of the validness of the proposed approach.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121564642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-Time Gazed Object Identification with a Variable Point of View Using a Mobile Service Robot","authors":"Akishige Yuguchi, Tomoaki Inoue, G. A. G. Ricardez, Ming Ding, J. Takamatsu, T. Ogasawara","doi":"10.1109/RO-MAN46459.2019.8956451","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956451","url":null,"abstract":"As sensing and image recognition technologies advance, the environments where service robots operate expand into human-centered environments. Since the roles of service robots depend on the user situations, it is important for the robots to understand human intentions. Gaze information, such as gazed objects (i. e., the objects humans are looking at) can help to understand the users’ intentions. In this paper, we propose a real-time gazed object identification method from RGBD images captured by a camera mounted on a mobile service robot. First, we search for the candidate gazed objects using state-of-the-art, real-time object detection. Second, we estimate the human face direction using facial landmarks extracted by a real-time face detection tool. Then, by searching for an object along the estimated face direction, we identify the gazed object. If the gazed object identification fails even though a user is looking at an object, i. e., has a fixed gaze direction, the robot can determine whether the object is inside or outside the robot’s view based on the face direction, and, then, change its point of view to improve the identification. Finally, through multiple evaluation experiments with the mobile service robot Pepper, we verified the effectiveness of the proposed identification and the improvement of the identification accuracy by changing the robot’s point of view.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125014032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conflict Mediation in Human-Machine Teaming: Using a Virtual Agent to Support Mission Planning and Debriefing","authors":"Kerstin S Haring, Jessica Tobias, Justin Waligora, Elizabeth Phillips, N. Tenhundfeld, Gale M. Lucas, E. D. Visser, J. Gratch, Chad C. Tossell","doi":"10.1109/RO-MAN46459.2019.8956414","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956414","url":null,"abstract":"Socially intelligent artificial agents and robots are anticipated to become ubiquitous in home, work, and military environments. With the addition of such agents to human teams it is crucial to evaluate their role in the planning, decision making, and conflict mediation processes. We conducted a study to evaluate the utility of a virtual agent that provided mission planning support in a three-person human team during a military strategic mission planning scenario. The team consisted of a human team lead who made the final decisions and three supporting roles, two humans and the artificial agent. The mission outcome was experimentally designed to fail and introduced a conflict between the human team members and the leader. This conflict was mediated by the artificial agent during the debriefing process through discuss or debate and open communication strategies of conflict resolution [1]. Our results showed that our teams experienced conflict. The teams also responded socially to the virtual agent, although they did not find the agent beneficial to the mediation process. Finally, teams collaborated well together and perceived task proficiency increased for team leaders. Socially intelligent agents show potential for conflict mediation, but need careful design and implementation to improve team processes and collaboration.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"223 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123253433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimal Feature Selection for EMG-Based Finger Force Estimation Using LightGBM Model","authors":"Yuhang Ye, Chao Liu, N. Zemiti, Chenguang Yang","doi":"10.1109/RO-MAN46459.2019.8956453","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956453","url":null,"abstract":"Electromyogram (EMG) signal has been long used in human-robot interface in literature, especially in the area of rehabilitation. Recent rapid development in artificial intelligence (AI) has provided powerful machine learning tools to better explore the rich information embedded in EMG signals. For our specific application task in this work, i.e. estimate human finger force based on EMG signal, a LightGBM (Gradient Boosting Machine) model has been used. The main contribution of this study is the development of an objective and automatic optimal feature selection algorithm that can minimize the number of features used in the LightGBM model in order to simplify implementation complexity, reduce computation burden and maintain comparable estimation performance to the one with full features. The performance of the LightGBM model with selected optimal features is compared with 4 other popular machine learning models based on a dataset including 45 subjects in order to show the effectiveness of the developed feature selection method.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123471873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}