The 23rd IEEE International Symposium on Robot and Human Interactive Communication: Latest Publications

Enhancing the robot avateering metaphor discreetly with an assistive agent and its effect on perception
Pub Date: 2014-10-20 | DOI: 10.1109/ROMAN.2014.6926398
S. Koh, Kevin P. Pfeil, J. Laviola
{"title":"Enhancing the robot avateering metaphor discreetly with an assistive agent and its effect on perception","authors":"S. Koh, Kevin P. Pfeil, J. Laviola","doi":"10.1109/ROMAN.2014.6926398","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926398","url":null,"abstract":"We present a modeling approach to develop an agent that assists users discreetly in teleoperation when avateering a robot via an inexpensive motion sensor. Avateering is a powerful metaphor, and can be an effective teleoperating strategy. Avateering a Humanoid Robot (HR) with no wearable device encumberment, such as using the popular Kinect/NUI motion sensor, is also desirable and very promising. However, this control scheme makes it difficult for the slave robot to make contact and interact with objects with high accuracy due to factors such as viewpoint, individually-unique and unilateral human-side control imprecision, and lack of informative tactile feedback. Our research explores the addition of an assistive agent that arbitrates user input without disrupting the overall experience and expectation. Additionally, our agent assists with maintaining a higher level of accuracy for interaction tasks, in our case, a grasping and lifting scenario. Using theWebots robot simulator, we implemented 4 assistive agents to augment the user in avateering the Darwin-OP robot. The agent iterations are described, and results of a user study are presented. We discuss user perception towards the avateering metaphor when enhanced by the agent and also when unassisted, including perceived easiness of the task, responsiveness of the robot, and accuracy.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"25 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131923585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
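The arbitration idea in the abstract above lends itself to a small illustration. The sketch below is not the authors' Webots/Darwin-OP implementation; it is a minimal, hypothetical example of blending a user-commanded end-effector target toward an estimated grasp pose as the hand nears the object, so the assistance stays discreet. All names and parameters (`blend_toward_grasp`, `assist_radius`, `max_gain`) are invented for illustration.

```python
import numpy as np

def blend_toward_grasp(user_target, grasp_target, distance_to_object,
                       assist_radius=0.15, max_gain=0.8):
    """Arbitrate between the raw user command and an assistive grasp pose.

    user_target        : (3,) end-effector position commanded by the user (m)
    grasp_target       : (3,) grasp position estimated for the object (m)
    distance_to_object : scalar distance from end effector to the object (m)
    assist_radius      : distance below which assistance ramps in (m)
    max_gain           : upper bound on how much the agent may override the user
    """
    user_target = np.asarray(user_target, dtype=float)
    grasp_target = np.asarray(grasp_target, dtype=float)

    # Assistance is zero far from the object and ramps up linearly as the
    # hand approaches, so the correction stays unobtrusive ("discreet").
    gain = max_gain * max(0.0, 1.0 - distance_to_object / assist_radius)
    return (1.0 - gain) * user_target + gain * grasp_target

# Example: the user is 5 cm from the object, slightly off the grasp point.
blended = blend_toward_grasp(user_target=[0.30, 0.02, 0.10],
                             grasp_target=[0.32, 0.00, 0.09],
                             distance_to_object=0.05)
print(blended)
```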
Mechanisms and capabilities for human robot collaboration
Pub Date: 2014-10-20 | DOI: 10.1109/ROMAN.2014.6926329
Claus Lenz, A. Knoll
{"title":"Mechanisms and capabilities for human robot collaboration","authors":"Claus Lenz, A. Knoll","doi":"10.1109/ROMAN.2014.6926329","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926329","url":null,"abstract":"This paper deals with the concept of a collaborative human-robot workspace in production environments recapitulating and complementing the work of the author presented in [1]. Different aspects regarding collaboration are discussed and applied in an exemplary scenario. Modalities including visualizations and audio are used to inform the human worker about next assembly steps and the current status of the system. The robot supplies the human worker with needed parts in an adaptive manner to prevent errors and to increase ergonomic benefits. Further, the human worker can intuitively interact and adjust the robot using projected menus on the worktable and by force-guidance of the robot. All these functions are brought together in an overall architecture.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"255 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132538394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 25
Particle filter based lower limb prediction and motion control for JAIST Active Robotic Walker
Pub Date: 2014-10-20 | DOI: 10.1109/ROMAN.2014.6926222
Takanori Ohnuma, Geunho Lee, N. Chong
{"title":"Particle filter based lower limb prediction and motion control for JAIST Active Robotic Walker","authors":"Takanori Ohnuma, Geunho Lee, N. Chong","doi":"10.1109/ROMAN.2014.6926222","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926222","url":null,"abstract":"This paper presents an interactive control for our assistive robotic walker, the JAIST Active Robotic Walker (JARoW), developed for elderly people in need of walking assistance. The focus of our paper is placed on how to estimate the user's walking parameters by sensing the locations of lower limbs and to predict his or her walking patterns. For this purpose, a particle-filter-based prediction technique and a motion controller are developed to help JARoW smoothly generate the direction and velocity of its movements in a way that reflects the prediction. The proposed scheme and its implementation are described in detail, and outdoor experiments are performed to demonstrate its effectiveness and feasibility in everyday environments.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132546916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
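To make the particle-filter idea concrete, here is a minimal sketch (not the JARoW implementation) that tracks a single 2-D lower-limb position from noisy range-sensor detections with a constant-velocity motion model. The particle count, noise levels, sensor period and the fake detections are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500                        # number of particles (assumed)
dt = 0.05                      # sensor period in seconds (assumed)

# State per particle: [x, y, vx, vy]; initialise positions near the first detection.
particles = np.zeros((N, 4))
particles[:, 0:2] = rng.normal((0.10, 0.00), 0.05, size=(N, 2))
weights = np.full(N, 1.0 / N)

def predict(particles, dt, pos_noise=0.01, accel_noise=0.5):
    """Constant-velocity prediction with small position and acceleration noise."""
    particles[:, 0:2] += particles[:, 2:4] * dt
    particles[:, 0:2] += rng.normal(0.0, pos_noise, size=(len(particles), 2))
    particles[:, 2:4] += rng.normal(0.0, accel_noise * dt, size=(len(particles), 2))
    return particles

def update(particles, weights, z, meas_noise=0.03):
    """Reweight particles by a Gaussian likelihood of the 2-D detection z."""
    d = np.linalg.norm(particles[:, 0:2] - z, axis=1)
    weights *= np.exp(-0.5 * (d / meas_noise) ** 2) + 1e-300
    weights /= weights.sum()
    return weights

def resample(particles, weights):
    """Resample particles to avoid degeneracy; weights become uniform again."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Feed a short sequence of fake leg detections and read out the estimate.
for z in [(0.10, 0.00), (0.12, 0.01), (0.15, 0.01), (0.19, 0.02)]:
    particles = predict(particles, dt)
    weights = update(particles, weights, np.array(z))
    particles, weights = resample(particles, weights)

estimate = np.average(particles[:, 0:2], axis=0, weights=weights)
print("estimated limb position:", estimate)
```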
Teachers' views on the use of empathic robotic tutors in the classroom
Pub Date: 2014-10-20 | DOI: 10.1109/ROMAN.2014.6926376
Sofia Serholt, W. Barendregt, Iolanda Leite, H. Hastie, A. Jones, Ana Paiva, A. Vasalou, Ginevra Castellano
{"title":"Teachers' views on the use of empathic robotic tutors in the classroom","authors":"Sofia Serholt, W. Barendregt, Iolanda Leite, H. Hastie, A. Jones, Ana Paiva, A. Vasalou, Ginevra Castellano","doi":"10.1109/ROMAN.2014.6926376","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926376","url":null,"abstract":"In this paper, we describe the results of an interview study conducted across several European countries on teachers' views on the use of empathic robotic tutors in the classroom. The main goals of the study were to elicit teachers' thoughts on the integration of the robotic tutors in the daily school practice, understanding the main roles that these robots could play and gather teachers' main concerns about this type of technology. Teachers' concerns were much related to the fairness of access to the technology, robustness of the robot in students' hands and disruption of other classroom activities. They saw a role for the tutor in acting as an engaging tool for all, preferably in groups, and gathering information about students' learning progress without taking over the teachers' responsibility for the actual assessment. The implications of these results are discussed in relation to teacher acceptance of ubiquitous technologies in general and robots in particular.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132274084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 56
Learning something from nothing: Leveraging implicit human feedback strategies
Pub Date: 2014-10-20 | DOI: 10.1109/ROMAN.2014.6926319
R. Loftin, Bei Peng, J. MacGlashan, M. Littman, Matthew E. Taylor, Jeff Huang, D. Roberts
{"title":"Learning something from nothing: Leveraging implicit human feedback strategies","authors":"R. Loftin, Bei Peng, J. MacGlashan, M. Littman, Matthew E. Taylor, Jeff Huang, D. Roberts","doi":"10.1109/ROMAN.2014.6926319","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926319","url":null,"abstract":"In order to be useful in real-world situations, it is critical to allow non-technical users to train robots. Existing work has considered the problem of a robot or virtual agent learning behaviors from evaluative feedback provided by a human trainer. That work, however, has treated feedback as a numeric reward that the agent seeks to maximize, and has assumed that all trainers will provide feedback in the same way when teaching the same behavior. We report the results of a series of user studies that indicate human trainers use a variety of approaches to providing feedback in practice, which we describe as different “training strategies.” For example, users may not always give explicit feedback in response to an action, and may be more likely to provide explicit reward than explicit punishment, or vice versa. If the trainer is consistent in their strategy, then it may be possible to infer knowledge about the desired behavior from cases where no explicit feedback is provided. We discuss a probabilistic model of human-provided feedback that can be used to classify these different training strategies based on when the trainer chooses to provide explicit reward and/or explicit punishment, and when they choose to provide no feedback. Additionally, we investigate how training strategies may change in response to the appearance of the learning agent. Ultimately, based on this work, we argue that learning agents designed to understand and adapt to different users' training strategies will allow more efficient and intuitive learning experiences.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131081555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 28
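The following sketch is loosely inspired by the abstract's description of a probabilistic feedback model; it is not the authors' model. It shows how silence can carry information once a trainer's strategy is assumed known; `mu_plus`, `mu_minus` and `eps` are hypothetical strategy parameters.

```python
import numpy as np

def posterior_correct(n_reward, n_punish, n_silent,
                      mu_plus=0.6, mu_minus=0.3, eps=0.05, prior=0.5):
    """Posterior probability that an action is the desired one, given counts of
    explicit reward, explicit punishment, and no feedback ("silence") from a
    trainer with an assumed strategy.

    mu_plus  : probability the trainer explicitly rewards a correct action
    mu_minus : probability the trainer explicitly punishes an incorrect action
    eps      : probability of a mistaken (opposite) feedback signal
    Silence is informative: under a reward-focused strategy (mu_plus high,
    mu_minus low), silence is evidence the action was wrong, and vice versa.
    """
    # Per-observation probabilities (reward, punish, silent) under each hypothesis.
    p_obs_correct = np.array([mu_plus, eps, 1.0 - mu_plus - eps])
    p_obs_wrong   = np.array([eps, mu_minus, 1.0 - mu_minus - eps])
    counts = np.array([n_reward, n_punish, n_silent])

    like_correct = np.prod(p_obs_correct ** counts)
    like_wrong   = np.prod(p_obs_wrong ** counts)
    evidence = prior * like_correct + (1.0 - prior) * like_wrong
    return prior * like_correct / evidence

# A trainer who mostly rewards and rarely punishes stayed silent five times:
print(posterior_correct(n_reward=0, n_punish=0, n_silent=5,
                        mu_plus=0.6, mu_minus=0.1))   # low value: silence suggests "wrong"
```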
HandSOM - neural clustering of hand motion for gesture recognition in real time
Pub Date: 2014-10-20 | DOI: 10.1109/ROMAN.2014.6926380
G. I. Parisi, Doreen Jirak, S. Wermter
{"title":"HandSOM - neural clustering of hand motion for gesture recognition in real time","authors":"G. I. Parisi, Doreen Jirak, S. Wermter","doi":"10.1109/ROMAN.2014.6926380","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926380","url":null,"abstract":"Gesture recognition is an important task in Human-Robot Interaction (HRI) and the research effort towards robust and high-performance recognition algorithms is increasing. In this work, we present a neural network approach for learning an arbitrary number of labeled training gestures to be recognized in real time. The representation of gestures is hand-independent and gestures with both hands are also considered. We use depth information to extract salient motion features and encode gestures as sequences of motion patterns. Preprocessed sequences are then clustered by a hierarchical learning architecture based on self-organizing maps. We present experimental results on two different data sets: command-like gestures for HRI scenarios and communicative gestures that include cultural peculiarities, often excluded in gesture recognition research. For better recognition rates, noisy observations introduced by tracking errors are detected and removed from the training sets. Obtained results motivate further investigation of efficient neural network methodologies for gesture-based communication.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123806681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
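As a rough illustration of the clustering step named in the title (not the HandSOM architecture itself, which is hierarchical and works on preprocessed sequences), the sketch below trains a tiny self-organizing map on fake motion-feature vectors and encodes a gesture as the sequence of winning map nodes. The feature layout and all hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

class MotionSOM:
    """A tiny self-organizing map for clustering motion-feature vectors.

    Nearby map nodes come to represent similar motion patterns, so a gesture
    can be encoded as a sequence of winning nodes over time.
    """

    def __init__(self, rows, cols, dim, lr=0.5, sigma=1.5):
        self.weights = rng.random((rows, cols, dim))
        self.coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                           indexing="ij"), axis=-1)
        self.lr, self.sigma = lr, sigma

    def winner(self, x):
        # Best-matching unit: node whose weight vector is closest to the input.
        d = np.linalg.norm(self.weights - x, axis=-1)
        return np.unravel_index(np.argmin(d), d.shape)

    def train(self, data, epochs=50):
        for t in range(epochs):
            lr = self.lr * (1 - t / epochs)              # decaying learning rate
            sigma = self.sigma * (1 - t / epochs) + 0.5  # shrinking neighbourhood
            for x in rng.permutation(data):
                w = self.winner(x)
                # Neighbourhood function: nodes near the winner move more.
                h = np.exp(-np.sum((self.coords - np.array(w)) ** 2, axis=-1)
                           / (2 * sigma ** 2))
                self.weights += lr * h[..., None] * (x - self.weights)

# Fake motion features [dx, dy, dz, speed] sampled from two gesture types.
wave = rng.normal([0.8, 0.0, 0.0, 1.0], 0.1, size=(50, 4))
push = rng.normal([0.0, 0.0, 0.9, 0.6], 0.1, size=(50, 4))
som = MotionSOM(rows=5, cols=5, dim=4)
som.train(np.vstack([wave, push]))

# A gesture becomes a sequence of best-matching units over time.
print([som.winner(f) for f in wave[:5]])
```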
Real time modeling of the cognitive load of an Urban Search And Rescue robot operator
Pub Date: 2014-10-20 | DOI: 10.1109/ROMAN.2014.6926363
T. R. Colin, N. Smets, T. Mioch, Mark Antonius Neerincx
{"title":"Real time modeling of the cognitive load of an Urban Search And Rescue robot operator","authors":"T. R. Colin, N. Smets, T. Mioch, Mark Antonius Neerincx","doi":"10.1109/ROMAN.2014.6926363","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926363","url":null,"abstract":"Urban Search And Rescue (USAR) robots are used to find and save victims in the wake of disasters such as earthquakes or terrorist attacks. The operators of these robots are affected by high cognitive load; this hinders effective robot usage. This paper presents a cognitive task load model for real-time monitoring and, subsequently, balancing of workload on three factors that affect operator performance and mental effort: time occupied, level of information processing, and number of task switches. To test an implementation of the model, five participants drove a shape-shifting USAR robot, accumulating over 16 hours of driving time in the course of 485 USAR missions with varying objectives and difficulty. An accuracy of 69% was obtained for discrimination between low and high cognitive load; higher accuracy was measured for discrimination between extreme cognitive loads. This demonstrates that such a model can contribute, in a non-invasive manner, to estimating an operator's cognitive state. Several ways to further improve accuracy are discussed, based on additional experimental results.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124792311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
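Purely as an illustration of combining the three named factors into a real-time load estimate, and not the authors' cognitive task load model, the sketch below scores a sliding window with invented weights and a threshold; in practice these would be fitted to labelled mission data such as that described in the abstract.

```python
from dataclasses import dataclass

@dataclass
class WindowStats:
    """Task-load factors measured over a sliding time window."""
    time_occupied: float       # fraction of the window spent on tasks, 0..1
    info_level: float          # average level of information processing, 0..1
    task_switches: int         # number of task switches in the window

def estimate_load(w: WindowStats,
                  weights=(0.5, 0.3, 0.2), switch_cap=6, threshold=0.55):
    """Combine the three factors into a scalar score and binarise it.

    The weights, the switch normalisation cap and the decision threshold are
    invented placeholders, not values from the paper.
    """
    switches_norm = min(w.task_switches / switch_cap, 1.0)
    score = (weights[0] * w.time_occupied
             + weights[1] * w.info_level
             + weights[2] * switches_norm)
    return score, ("high" if score >= threshold else "low")

print(estimate_load(WindowStats(time_occupied=0.9, info_level=0.7, task_switches=5)))
print(estimate_load(WindowStats(time_occupied=0.3, info_level=0.2, task_switches=1)))
```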
What a robotic companion could do for a diabetic child
Pub Date: 2014-10-20 | DOI: 10.1109/ROMAN.2014.6926373
I. Baroni, M. Nalin, Paul E. Baxter, C. Pozzi, E. Oleari, A. Sanna, Tony Belpaeme
{"title":"What a robotic companion could do for a diabetic child","authors":"I. Baroni, M. Nalin, Paul E. Baxter, C. Pozzi, E. Oleari, A. Sanna, Tony Belpaeme","doi":"10.1109/ROMAN.2014.6926373","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926373","url":null,"abstract":"Being a child with diabetes is challenging: apart from the emotional difficulties of dealing with the disease, there are multiple physical aspects that need to be dealt with on a daily basis. Furthermore, as the children grow older, it becomes necessary to self-manage their condition without the explicit supervision of parents or carers. This process requires that the children overcome a steep learning curve. Previous work hypothesized that a robot could provide a supporting role in this process. In this paper, we characterise this potential support in greater detail through a structured collection of perspectives from all stakeholders, namely the diabetic children, their siblings and parents, and the healthcare professionals involved in their diabetes education and care. A series of brain-storming sessions were conducted with 22 families with a diabetic child (32 children and 38 adults in total) to explore areas in which they expected that a robot could provide support and/or assistance. These perspectives were then reviewed, validated and extended by healthcare professionals to provide a medical grounding. The results of these analyses suggested a number of specific functions that a companion robot could fulfil to support diabetic children in their daily lives.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128310872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 36
Development of a wearable system for navigating the visually impaired in the indoor environment - a prototype system for fork detection and navigation -
Pub Date: 2014-10-20 | DOI: 10.1109/ROMAN.2014.6926310
M. Sekiguchi, Koichi Ishiwata, Masataka Fuchida, Akio Nakamura
{"title":"Development of a wearable system for navigating the visually impaired in the indoor environment - a prototype system for fork detection and navigation -","authors":"M. Sekiguchi, Koichi Ishiwata, Masataka Fuchida, Akio Nakamura","doi":"10.1109/ROMAN.2014.6926310","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926310","url":null,"abstract":"We propose a wearable guide system for supporting the visually impaired. A wearable device equipped with a small laser range sensor and a gyroscope as sensors is developed. The sensors are attached to the user's chest. The range sensor is utilized to obtain distance information to the wall in the horizontal cross-sectional plane in front of the user. The gyroscope is adopted to estimate user's direction utilized in the fork pattern classification. The system classifies passage forks of the indoor environment into 7 patterns; left turn, straight, right turn, T-junction (left), T-junction (dead end), T-junction (right), crossroads. Based on the fork classification and prepared environmental topological map, the system instructs the user appropriate direction by voice at the fork. Experimental trials show the basic validity of the proposed system. Among 10 eye-masked subjects who join the experiment, the only two persons can reach the destination without the voice announcement. On the other hand, 8 of 10 subjects who wear the proposed wearable device can reach the destination.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125794458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
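A minimal sketch of the fork-classification idea, assuming a horizontal scan of ranges and bearings in front of the user: each of the left, front and right sectors is declared open or closed by a range threshold, and the resulting pattern is mapped to the seven fork types listed in the abstract. The threshold, sector boundaries and the mapping itself are assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical mapping from which directions are open (left, front, right)
# to the seven fork patterns named in the abstract.
FORK_PATTERNS = {
    (False, True,  False): "straight",
    (True,  False, False): "left turn",
    (False, False, True ): "right turn",
    (True,  True,  False): "T-junction (left)",
    (True,  False, True ): "T-junction (dead end)",
    (False, True,  True ): "T-junction (right)",
    (True,  True,  True ): "crossroads",
}

def classify_fork(ranges_m, angles_deg, open_dist=2.0):
    """Classify the fork ahead from one horizontal laser scan.

    ranges_m   : array of range readings (metres)
    angles_deg : matching bearings, 0 = straight ahead, negative = left
    open_dist  : a sector counts as "open" if its median range exceeds this (m)
    """
    ranges = np.asarray(ranges_m, dtype=float)
    angles = np.asarray(angles_deg, dtype=float)

    def sector_open(lo, hi):
        sel = (angles >= lo) & (angles < hi)
        return bool(np.median(ranges[sel]) > open_dist)

    key = (sector_open(-90, -30),   # left
           sector_open(-30,  30),   # front
           sector_open( 30,  90))   # right
    return FORK_PATTERNS.get(key, "no passage / unknown")

# Example scan: open ahead and to the right, wall close on the left.
angles = np.arange(-90, 90, 5)
ranges = np.where(angles < -30, 0.8, 4.0)
print(classify_fork(ranges, angles))   # -> "T-junction (right)"
```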
Social distance augmented qualitative trajectory calculus for Human-Robot Spatial Interaction
Pub Date: 2014-10-20 | DOI: 10.1109/ROMAN.2014.6926305
C. Dondrup, N. Bellotto, Marc Hanheide
{"title":"Social distance augmented qualitative trajectory calculus for Human-Robot Spatial Interaction","authors":"C. Dondrup, N. Bellotto, Marc Hanheide","doi":"10.1109/ROMAN.2014.6926305","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926305","url":null,"abstract":"In this paper we propose to augment a wellestablished Qualitative Trajectory Calculus (QTC) by incorporating social distances into the model to facilitate a richer and more powerful representation of Human-Robot Spatial Interaction (HRSI). By combining two variants of QTC that implement different resolutions and switching between them based on distance thresholds we show that we are able to both reduce the complexity of the representation and at the same time enrich QTC with one of the core HRSI concepts: proxemics. Building on this novel integrated QTC model, we propose to represent the joint spatial behaviour of a human and a robot employing a probabilistic representation based on Hidden Markov Models. We show the appropriateness of our approach by encoding different HRSI behaviours observed in a human-robot interaction study and show how the models can be used to represent and classify these behaviours using social distance-augmented QTC.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126489045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
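To illustrate the kind of representation being augmented, here is a simplified, QTC-like encoding (not the paper's formal calculus): each agent gets a towardness symbol, and side symbols are added only when the pair is inside an assumed social-distance threshold, mirroring the resolution switching the abstract describes. Symbol conventions and the threshold value are assumptions.

```python
import numpy as np

def _towardness(pos, prev_pos, other):
    """Is the agent moving towards (-), away from (+), or keeping the same
    distance (0) with respect to the other agent?"""
    d_now = np.linalg.norm(other - pos)
    d_prev = np.linalg.norm(other - prev_pos)
    if abs(d_now - d_prev) < 1e-3:
        return "0"
    return "-" if d_now < d_prev else "+"

def _side(pos, prev_pos, other):
    """Does the agent move to one side (-/+) of, or along (0), the line
    connecting it to the other agent?"""
    heading = pos - prev_pos
    to_other = other - prev_pos
    cross = heading[0] * to_other[1] - heading[1] * to_other[0]
    if abs(cross) < 1e-6:
        return "0"
    return "-" if cross > 0 else "+"

def qtc_state(human, human_prev, robot, robot_prev, social_dist=1.2):
    """Return a QTC-like state for one time step.

    Beyond `social_dist` (an assumed proxemic threshold) only the coarse
    towardness symbols are kept; inside it the side symbols are added as well,
    giving a finer-grained state.
    """
    human, human_prev = np.asarray(human, float), np.asarray(human_prev, float)
    robot, robot_prev = np.asarray(robot, float), np.asarray(robot_prev, float)

    state = [_towardness(human, human_prev, robot),
             _towardness(robot, robot_prev, human)]
    if np.linalg.norm(human - robot) < social_dist:
        state += [_side(human, human_prev, robot),
                  _side(robot, robot_prev, human)]
    return tuple(state)

# A human walks towards a stationary robot, already inside social distance.
print(qtc_state(human=(1.0, 0.0), human_prev=(1.1, 0.0),
                robot=(0.0, 0.0), robot_prev=(0.0, 0.0)))
```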