Latest publications: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)

Recommendation dialogue system through pragmatic argumentation
Ching-Ying Cheng, Xiaobei Qian, Shih-Huan Tseng, L. Fu
DOI: https://doi.org/10.1109/ROMAN.2017.8172323
Abstract: In an ageing society, we expect that a robotic caregiver will be able to persuade the elderly to adopt healthier behavior. In this work, pragmatic argumentation is adopted to make the elderly realize that a choice beneficial to health, such as eating suitable fruits, is really worthwhile. Based on this concept, an adaptive recommendation dialogue system using pragmatic argumentation is proposed, with three objectives. First, a knowledge base for pragmatic argument construction is built, which concerns not only the effect of a decision but also the reason for that effect. Second, the robot is endowed with the ability to make recommendations that adapt to different states of the elder; the recommendation is determined by integrating both the robot's and the elder's preferences for different perspectives, so that the robot knows how to reach a compromise with the elder. Lastly, by learning the elder's preferred perspectives in conversation, the robot selects the perspective for constructing arguments under which the elder is most easily convinced to accept its recommendation. We invited 21 volunteers to interact with the robot. The experimental results show that the recommendation system has the potential to affect the decision making of the elderly and help them pursue a healthier life.
Citations: 3
Representing motion information from event-based cameras
Keith Sullivan, W. Lawson
DOI: https://doi.org/10.1109/ROMAN.2017.8172497
Abstract: Many recent works have successfully leveraged motion information (i.e., dense optical flow) for a variety of problems. In this paper, we introduce a methodology for capturing motion information using high-speed event-based cameras combined with convolutional neural networks (CNNs). Our motion event features (MEFs) succinctly capture motion magnitude and direction in a form suitable for input to a CNN. We demonstrate the broad applicability of MEFs across two disparate problems: action recognition and autonomous robot reactive control.
Citations: 2
Learning through sharing and distributing knowledge with application to object recognition and information retrieval
A. Mignon, Alban Bronisz, Ronan Le Hy, K. Mekhnacha, Luís Santos
DOI: https://doi.org/10.1109/ROMAN.2017.8172468
Abstract: The GrowMeUp project builds an assisted living environment based on a service robotics platform. The platform is able to learn the needs and habits of elderly persons, and its functionalities evolve to help them stay active, independent, and socially involved longer. Following the recent interest in cloud-enhanced robotics, we present a general framework for learning models by sharing and distributing knowledge between a cloud platform and a network of robots. We also provide two concrete example services that take advantage of the cloud structure to enhance their performance.
Citations: 0
The public's perception of humanlike robots: Online social commentary reflects an appearance-based uncanny valley, a general fear of a "Technology Takeover", and the unabashed sexualization of female-gendered robots
M. Strait, Cynthia Aguillon, Virginia Contreras, Noemi Garcia
DOI: https://doi.org/10.1109/ROMAN.2017.8172490
Abstract: Towards understanding the public's perception of humanlike robots, we examined commentary on 24 YouTube videos depicting social robots ranging in human similarity, from Honda's Asimo to Hiroshi Ishiguro's Geminoids. In particular, we investigated how people have responded to the emergence of highly humanlike robots (e.g., Bina48) in contrast to those with more prototypically "robotic" appearances (e.g., Asimo), coding the frequency at which the uncanny valley versus fears of replacement and/or a "technology takeover" arise in online discourse based on the robot's appearance. Consistent with Masahiro Mori's theory of the uncanny valley, people's commentary reflected an aversion to highly humanlike robots: the frequency of uncanny valley-related commentary was significantly higher in response to highly humanlike robots than to those of more prototypical appearance. Independent of the robots' human similarity, we further observed a moderate correlation between people's explicit fears of a "technology takeover" and their emotional responses to robots. Finally, in the course of our investigation we encountered a third and rather disturbing trend: the unabashed sexualization of female-gendered robots. The frequency at which this sexualization manifests in the online commentary exceeds that of the uncanny valley and fears of robot sentience/replacement combined. In sum, these findings shed light on the relevance of the uncanny valley "in the wild" and help situate it with respect to other design challenges for HRI.
Citations: 44
Where are the robots? In-feed embedded techniques for visualizing robot team member locations
S. Seo, J. Young, Pourang Irani
DOI: https://doi.org/10.1109/ROMAN.2017.8172352
Abstract: We present a set of mini-map alternatives for indicating the relative locations of robot team members in a teleoperation interface, along with evaluation results showing that these can perform as well as mini-maps while being less intrusive. Teleoperation operators often work with a team of robots to improve task effectiveness. Maintaining awareness of where robot team members are, relative to oneself, is important for team effectiveness: for example, for deciding which robot may help with a task, which is best suited to investigate a point of interest, or where one should move next. We explore the use of established interface techniques from mobile computing to support teleoperators in maintaining peripheral awareness of robot team members' relative locations, and evaluate their nontrivial adaptation to teleoperation against an overview mini-map base case. Our results indicate that in-feed embedded indicators perform comparably to mini-maps while being less obtrusive, making them a viable alternative for teleoperation interfaces.
Citations: 7
Enriching robot's actions with affective movements
Julian M. Angel Fernandez, Andrea Bonarini
DOI: https://doi.org/10.1109/ROMAN.2017.8172337
Abstract: Emotions are considered by many researchers to be beneficial in social robotics, since they can enrich human-robot interaction. Although several works have studied emotion expression in robots, the mechanisms for expressing emotion are usually tightly integrated with the rest of the system, which limits their reuse in other applications. This paper presents a system initially created to facilitate the study of emotion projection, but designed to be adaptable to other fields. The emotional enrichment system is envisioned for use with any action decision system. A description of the system components and their characteristics is provided. The system has been adapted to two platforms with different degrees of freedom: Keepon and Triskarino.
Citations: 1
A cloud-based scene recognition framework for in-home assistive robots
Roberto Menicatti, A. Sgorbissa
DOI: https://doi.org/10.1109/ROMAN.2017.8172472
Abstract: The rapidly increasing number of elderly people has led to the development of in-home assistive robots for assisting and monitoring elderly people in their daily life. To this end, indoor scene and human activity recognition is fundamental. However, image processing is expensive in computational, energy, storage, and pricing terms, which can be problematic for consumer robots. For this reason, we propose the use of computer vision cloud services and a Naive Bayes model to perform indoor scene and daily human activity recognition. We implement the method on the telepresence robot Double, enabling it to autonomously find and approach a person in the environment as well as detect the activity being performed.
Citations: 5
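The core idea of this entry, classifying a scene from the object labels a cloud vision service returns, can be sketched with a tiny Naive Bayes model. The sketch below is illustrative only, not the authors' implementation: the scenes, labels, and add-one smoothing scheme are assumptions for demonstration.

```python
from collections import defaultdict
import math

class NaiveBayesScene:
    """Naive Bayes over object labels returned by a cloud vision service."""

    def __init__(self):
        self.label_counts = defaultdict(lambda: defaultdict(int))
        self.scene_counts = defaultdict(int)

    def train(self, scene, labels):
        self.scene_counts[scene] += 1
        for lab in labels:
            self.label_counts[scene][lab] += 1

    def predict(self, labels):
        total = sum(self.scene_counts.values())
        best, best_lp = None, float("-inf")
        for scene, n in self.scene_counts.items():
            lp = math.log(n / total)          # scene prior
            for lab in labels:
                # add-one smoothing so unseen labels don't zero out a scene
                lp += math.log((self.label_counts[scene][lab] + 1) / (n + 2))
            if lp > best_lp:
                best, best_lp = scene, lp
        return best

clf = NaiveBayesScene()
clf.train("kitchen", ["sink", "oven", "cup"])
clf.train("bedroom", ["bed", "pillow", "lamp"])
print(clf.predict(["oven", "cup"]))  # "kitchen"
```

In the paper's setting, the training labels would come from annotated images of the robot's home environment rather than the hand-written lists used here.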
Keep on dancing: Effects of expressive motion mimicry
R. Simmons, H. Knight
DOI: https://doi.org/10.1109/ROMAN.2017.8172382
Abstract: Expressive motion refers to movements that help convey an agent's attitude towards its task or environment. People frequently use expressive motion to indicate internal states such as emotion, confidence, and engagement. Robots can also exhibit expressive motion, and studies have shown that people can legibly interpret it. Mimicry, imitating the behaviors of others, has been shown to increase rapport between people. The research question addressed in this study is how robots mimicking the expressive motion of children affects the children's interaction with dancing robots. The paper presents our approach to generating and characterizing expressive motion, based on the Laban Effort System, and the results of the study, which provide both significant and suggestive evidence that such mimicry has positive effects on the children's behaviors.
Citations: 11
Generating 3D fundamental map by large-scale SLAM and graph-based optimization focused on road center line
Shun Niijima, Jirou Nitta, Y. Sasaki, H. Mizoguchi
DOI: https://doi.org/10.1109/ROMAN.2017.8172455
Abstract: This paper presents a method to generate a large-scale 3D fundamental map from a running vehicle. To create an approach that is easy to use for frequent updates, we propose a system built on simultaneous localization and mapping (SLAM), a robot mapping technology. Traditional methods require special machines or many manual operations, leading to high mapping costs; the existing mobile mapping system (MMS) requires manual anchor-point measurement to ensure accuracy. To solve this problem, we propose a 3D map optimization method that uses road information from the standard map issued by the Geospatial Information Authority of Japan. From the SLAM result, the road center line of the 3D shape map is estimated by assuming the car is driving on the road. Pose graph optimization between the estimated road center line and that of the standard map corrects the cumulative distortion of the SLAM result. Experimental results from on-vehicle 3D LIDAR observation show that the proposed system corrects the cumulative distortion of the SLAM results and automatically generates a large-scale 3D map meeting reference accuracy.
Citations: 6
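The correction step this entry describes, pose graph optimization that pins a drifting SLAM trajectory to a reference map, reduces to linear least squares in the simplest case. The 1D sketch below is an illustrative toy, not the authors' system: odometry constraints tie consecutive poses together, while "anchor" constraints from the reference center line pull selected poses back to known positions.

```python
import numpy as np

def optimize(odom, anchors, w=10.0):
    """Solve a 1D pose graph: minimize
       sum_i ((x[i+1] - x[i]) - odom[i])^2  +  w * sum_k (x[k] - anchors[k])^2
    as one linear least-squares problem."""
    n = len(odom) + 1
    rows, rhs = [], []
    for i, d in enumerate(odom):              # odometry (relative) constraints
        r = np.zeros(n)
        r[i + 1], r[i] = 1.0, -1.0
        rows.append(r)
        rhs.append(d)
    for k, a in anchors.items():              # reference-map (absolute) anchors
        r = np.zeros(n)
        r[k] = np.sqrt(w)
        rows.append(r)
        rhs.append(np.sqrt(w) * a)
    A, b = np.vstack(rows), np.array(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Drifting odometry claims each step is 1.1 m (4.4 m total), but the
# reference map says pose 0 is at 0 m and pose 4 at 4.0 m; the
# optimization distributes the drift and pulls the trajectory onto the map.
x = optimize([1.1] * 4, {0: 0.0, 4: 4.0})
print(np.round(x, 2))
```

The paper's version works on full 3D poses with a proper pose graph solver, but the structure is the same: relative SLAM constraints plus absolute constraints from the Geospatial Information Authority map.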
Hand motion recognition using a distance sensor array
Sung-gwi Cho, M. Yoshikawa, Ming Ding, J. Takamatsu, T. Ogasawara
DOI: https://doi.org/10.1109/ROMAN.2017.8172496
Abstract: Many studies of hand motion recognition using surface electromyography (sEMG) have been conducted. However, it is difficult to capture the activity of deep-layer muscles with sEMG, and the pronation and supination of the forearm, motions important in grasping and manipulating daily objects, are caused by those muscles. We believe hand motions can be accurately recognized from deep-layer muscle activity via forearm deformation, which is caused by the complex motion of surface and deep-layer muscles, tendons, and bones. In this study, we propose a novel hand motion recognition method based on measuring forearm deformation with a distance sensor array. The array is designed from a 3D model of the forearm; because its shape is designed to fit the neutral position of the forearm, it can measure small deformations. A Support Vector Machine (SVM) is used to recognize seven types of hand motion, with two types of features extracted from the time difference of the forearm deformation. Hand motion recognition experiments showed that the proposed method correctly recognized motions caused by the activity of both surface and deep-layer muscles, including pronation and supination of the forearm; moreover, hand opening, a small-deformation motion, was also correctly recognized.
Citations: 4
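The classification stage of this entry, an SVM over distance-array readings, can be illustrated with a dependency-light sketch. Everything below is assumed for demonstration (the 8-sensor array, the synthetic deformation patterns, the two motion classes, and the hand-rolled linear SVM trained by subgradient descent on the hinge loss); the paper recognizes seven motions with its own features and a full SVM implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SENSORS = 8  # assumed array size, for illustration

def sample(pattern, n=50, noise=0.05):
    """Simulated array readings: a motion-specific deformation plus noise."""
    return pattern + noise * rng.standard_normal((n, N_SENSORS))

# Hypothetical per-sensor deformation profiles for two motions.
grasp = np.array([0.1, 0.3, 0.5, 0.4, 0.2, 0.1, 0.0, 0.1])
supination = np.array([0.4, 0.1, 0.0, 0.2, 0.5, 0.4, 0.3, 0.0])

X = np.vstack([sample(grasp), sample(supination)])
y = np.array([-1.0] * 50 + [1.0] * 50)        # SVM labels in {-1, +1}

def train_svm(X, y, lam=0.001, eta=0.01, epochs=300):
    """Linear SVM via stochastic subgradient descent on the hinge loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            if y[i] * (X[i] @ w + b) < 1:     # margin violated: pull toward sample
                w += eta * (y[i] * X[i] - lam * w)
                b += eta * y[i]
            else:                             # margin satisfied: only regularize
                w -= eta * lam * w
    return w, b

w, b = train_svm(X, y)
pred = np.sign(sample(grasp, n=5) @ w + b)
print(pred)  # new grasp readings should come out as class -1
```

A multi-class version for the paper's seven motions would train one-vs-rest classifiers in the same way.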