The 23rd IEEE International Symposium on Robot and Human Interactive Communication — Latest Publications

AOA: Ambient obstacle avoidance interface
The 23rd IEEE International Symposium on Robot and Human Interactive Communication. Pub Date: 2014-10-20. DOI: 10.1109/ROMAN.2014.6926224
Wei Liang Kenny Chua, Matt Johnson, T. Eskridge, Brenan Keller
Abstract: In this paper, we present a novel interface for teleoperating ground vehicles. Obstacle avoidance with ground vehicles demands a high level of operator attention, typically distracting from the primary mission. The Ambient Obstacle Avoidance (AOA) interface was designed to allow operators to effectively perform a primary task, such as search, while still effectively avoiding obstacles. The AOA wraps around a standard video interface and provides range information without requiring a separate screen. AOA combines and reduces different data streams into proportionately scaled symbology that directly shows important relationships between vehicle width, vehicle orientation, and obstacles in the environment. The AOA interface was tested both in simulation and on physical robots. Results from both tests show an improvement in obstacle avoidance during navigation with the AOA. In addition, results from the simulation test indicate that operators using AOA were able to leverage ambient vision such that the primary visual task was not impeded.
Citations: 2
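
The AOA idea described above — collapsing range data into proportionally scaled symbology wrapped around the video feed — can be illustrated with a small sketch. This is not the authors' implementation; the sector layout, scaling constants, and passability test below are assumptions made only to show the kind of data reduction involved.

```python
# Illustrative sketch (not the authors' implementation): reducing a laser scan to
# proportionally scaled "ambient" border bars around a video frame. Sector layout,
# scaling constants, and the passability test are assumptions for illustration.
import numpy as np

VEHICLE_WIDTH_M = 0.6      # assumed vehicle width
MAX_RANGE_M = 5.0          # ranges beyond this are not displayed
N_SECTORS = 12             # border segments wrapped around the video frame

def ambient_bars(bearings_rad, ranges_m):
    """Collapse a 2D scan into one bar per border sector.

    Returns (intensity, blocked) per sector: intensity in [0, 1] grows as the
    nearest obstacle in that sector approaches; blocked flags gaps narrower
    than the vehicle width at that range.
    """
    edges = np.linspace(-np.pi, np.pi, N_SECTORS + 1)
    intensity = np.zeros(N_SECTORS)
    blocked = np.zeros(N_SECTORS, dtype=bool)
    for i in range(N_SECTORS):
        mask = (bearings_rad >= edges[i]) & (bearings_rad < edges[i + 1])
        if not mask.any():
            continue
        nearest = max(float(ranges_m[mask].min()), 1e-3)
        intensity[i] = max(0.0, 1.0 - nearest / MAX_RANGE_M)
        # Angular width the vehicle would need at the nearest range in this sector.
        needed = 2.0 * np.arcsin(min(1.0, VEHICLE_WIDTH_M / (2.0 * nearest)))
        blocked[i] = needed > (edges[i + 1] - edges[i])
    return intensity, blocked
```
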
Real-time validation of a novel two-wheeled robot with a dynamically moving payload
The 23rd IEEE International Symposium on Robot and Human Interactive Communication. Pub Date: 2014-10-20. DOI: 10.1109/ROMAN.2014.6926237
O. Sayidmarie, M. Tokhi, S. A. Agouri
Abstract: In real-life applications, two-wheeled robots are expected to carry payloads of different sizes, at different positions along the vertical axis, and moving at different speeds. The impact of these parameters should be studied in detail in order to account for their effect on robot stability and the control mechanism. This paper investigates the impact of dynamically changing the payload position on the system's damping characteristics while the robot is in its balancing state.
Citations: 3
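
To make the effect under study concrete, here is a minimal simulation sketch, assuming a linearized inverted-pendulum approximation of the balancing robot with a payload whose height changes during balancing. The model, gains, and parameters are illustrative assumptions, not the paper's validated system.

```python
# Minimal sketch (assumed model, not the paper's): a linearized two-wheeled
# balancing robot approximated as an inverted pendulum whose payload height
# r(t) changes during balancing, altering the closed-loop damping.
import numpy as np

M_PAYLOAD = 2.0            # kg, payload mass (assumed)
G = 9.81
KP, KD = 60.0, 8.0         # PD balance gains (assumed)

def payload_height(t):
    """Payload slides from 0.3 m to 0.6 m between t = 1 s and t = 2 s."""
    return 0.3 + 0.3 * np.clip(t - 1.0, 0.0, 1.0)

def simulate(t_end=5.0, dt=1e-3, theta0=0.05):
    theta, omega = theta0, 0.0
    log = []
    for t in np.arange(0.0, t_end, dt):
        r = payload_height(t)
        inertia = M_PAYLOAD * r**2
        torque = KP * theta + KD * omega               # stabilizing PD torque
        # Linearized tilt dynamics: I * theta_dd = m*g*r*theta - torque
        alpha = (M_PAYLOAD * G * r * theta - torque) / inertia
        omega += alpha * dt
        theta += omega * dt
        log.append((t, r, theta))
    return np.array(log)

# The logged tilt trace shows slower, less damped recovery as r(t) increases,
# which is the kind of effect the paper validates in real time.
```
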
Classification of reaching and gripping gestures for safety on walking aids
The 23rd IEEE International Symposium on Robot and Human Interactive Communication. Pub Date: 2014-10-20. DOI: 10.1109/ROMAN.2014.6926344
J. Paulo, P. Peixoto
Abstract: This paper proposes a solution to increase safety when using robotic walking aids. Walking aids are an important tool as long as they can be safely operated, but elderly people often discard them due to the fear of falling. The authors propose a safety system that locks the walker and gives appropriate feedback when the user is inadequately gripping the walker's grips. It is based on a low-cost optical hand tracker that allows the system to perceive how safely and efficiently a user reaches and grips the handles of the walking aid. This is an important requirement, especially in scenarios where the walker operation requires bodyweight support on the upper limbs. Adequate interaction with the walker reduces the risk of falling and gives extra confidence to the user. Experimental results with 10 volunteers provided strong evidence that the proposed system is able to distinguish between correct and incorrect grips. The proposed experimental setup can be easily integrated into different contexts, making it independent of the physical setup.
Citations: 4
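
As a rough illustration of how reach-and-grip episodes from a hand tracker might be classified as correct or incorrect, consider the sketch below. The feature set and the SVM classifier are assumptions made only for illustration and may differ from the paper's actual pipeline.

```python
# Illustrative sketch only: classifying reach-and-grip episodes as safe or
# unsafe from hand-tracker features. The feature set (palm-to-handle distance,
# approach speed, hold stability) and the SVM choice are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def episode_features(palm_xyz, handle_xyz, dt=1 / 30.0):
    """Summarize a tracked reach-and-grip episode into a fixed-length vector."""
    dist = np.linalg.norm(palm_xyz - handle_xyz, axis=1)   # per-frame distance
    speed = np.abs(np.diff(dist)) / dt
    return np.array([dist[-1],            # final palm-to-handle distance
                     dist.min(),          # closest approach
                     speed.mean(),        # mean approach speed
                     dist[-10:].std()])   # stability while holding

def train_grip_classifier(X, y):
    """X: one feature row per recorded episode, y: 1 = correct grip, 0 = incorrect."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X, y)
    return clf
```
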
Exploring inter- and intra-speaker variability in multi-modal task descriptions
The 23rd IEEE International Symposium on Robot and Human Interactive Communication. Pub Date: 2014-10-20. DOI: 10.1109/ROMAN.2014.6926228
Stephanie Schreitter, Brigitte Krenn
Abstract: In natural human-human task descriptions, the verbal and the non-verbal parts of communication together comprise the information necessary for understanding. When robots are to learn tasks from humans in the future, the detection and integrated interpretation of both of these cues is decisive. In the present paper, we present a qualitative study on the essential verbal and non-verbal cues by means of which information is transmitted while explaining and showing a task to a learner. In order to collect a respective data set for further investigation, 16 (human) teachers explained to a human learner how to mount a tube in a box with holdings, and six teachers did this with a robot learner. Detailed multi-modal analysis revealed that in both conditions, information was more reliable when transmitted via verbal and gestural references to the visual scene and via eye gaze than via the actual wording. In particular, intra-speaker variability in wording and perspective taking by the teacher potentially hinders understanding by the learner. The results presented in this paper emphasize the importance of investigating the inherently multi-modal nature of how humans structure and transmit information in order to derive respective computational models for robot learners.
Citations: 10
A shared control architecture for human-in-the-loop robotics applications
The 23rd IEEE International Symposium on Robot and Human Interactive Communication. Pub Date: 2014-10-20. DOI: 10.1109/ROMAN.2014.6926397
Velin D. Dimitrov, T. Padır
Abstract: We propose a shared control architecture to enable the modeling of human-in-the-loop cyber-physical systems (HiLCPS) in robotics applications. We identify challenges that currently hinder ideas and concepts from cross-domain applications from being shared among different implementations of HiLCPS. The presented architecture is developed with the intent to help bridge the gap between different communities developing HiLCPS by providing a common framework, associated metrics, and an associated language to describe individual elements. We provide examples from two different domains, disaster robotics and assistive robotics, to demonstrate the structure of the architecture.
Citations: 9
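
The abstract does not spell out the architecture's internals, but a common building block in shared control — which such an architecture would compose with metrics and a supervisory policy — is a convex blend of operator and autonomous commands. The sketch below is a generic illustration with hypothetical names, not the paper's design.

```python
# Generic blended shared-control step, shown only to illustrate the kind of
# element a HiLCPS architecture composes; names and the blending policy are
# hypothetical, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Command:
    linear: float    # m/s
    angular: float   # rad/s

def blend(human: Command, autonomy: Command, alpha: float) -> Command:
    """Convex blend of operator and autonomous commands.

    alpha = 0 gives full manual control, alpha = 1 full autonomy; a supervisory
    layer could set alpha from task context, operator workload, or safety
    metrics defined by the architecture.
    """
    a = min(max(alpha, 0.0), 1.0)
    return Command(
        linear=(1.0 - a) * human.linear + a * autonomy.linear,
        angular=(1.0 - a) * human.angular + a * autonomy.angular,
    )

# Example: an assistive layer nudges the operator's command toward a safer heading.
cmd = blend(Command(0.8, 0.0), Command(0.4, 0.3), alpha=0.25)
```
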
Multimodal biometric identification system for mobile robots combining human metrology to face recognition and speaker identification
The 23rd IEEE International Symposium on Robot and Human Interactive Communication. Pub Date: 2014-10-20. DOI: 10.1109/ROMAN.2014.6926273
Simon Ouellet, François Grondin, Francis Leconte, F. Michaud
Abstract: Recognizing a person from a distance is important to establish meaningful social interaction and to provide additional cues regarding the situations experienced by a robot. To do so, face recognition and speaker identification are commonly used biometrics, with identification performance that is influenced by the distance between the person and the robot. This paper presents a system that combines these biometrics with human metrology (HM) to increase identification performance and range. HM measures are derived from 2D silhouettes extracted online using a dynamic background subtraction approach, processing in parallel 45 front features and 24 side features in 400 ms, compared to 38 front and 22 side features extracted in sequence in 30 s using the approach presented by Lin and Wang [1]. By having each modality identify a set of up to five possible candidates, results suggest that combining modalities provides better performance than each individual modality alone, over a wider range of distances.
Citations: 7
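
The paper states that each modality returns up to five candidates; its exact fusion rule is not reproduced here. The sketch below shows one plausible way to combine such shortlists — a simple weighted, Borda-style rank fusion — purely as an illustrative assumption.

```python
# Sketch of one way to combine per-modality candidate shortlists; the rank-based
# fusion rule and weights below are assumptions, not the paper's method.
from collections import defaultdict

def fuse_candidates(shortlists, weights=None):
    """shortlists: dict modality -> ranked list of up to 5 identity labels.

    Returns identities sorted by fused score (higher is better).
    """
    weights = weights or {m: 1.0 for m in shortlists}
    scores = defaultdict(float)
    for modality, ranked in shortlists.items():
        for rank, identity in enumerate(ranked[:5]):
            scores[identity] += weights[modality] * (5 - rank)   # Borda-style points
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: face and human metrology agree on "alice", speaker ID is unsure.
print(fuse_candidates({
    "face": ["alice", "bob", "carol"],
    "speaker": ["dave", "alice"],
    "metrology": ["alice", "carol"],
}))
```
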
Robot Navigation in dynamic environment for an indoor human monitoring
The 23rd IEEE International Symposium on Robot and Human Interactive Communication. Pub Date: 2014-10-20. DOI: 10.1109/ROMAN.2014.6926334
S. Iizuka, Takahiko Nakamura, Satoshi Suzuki
Abstract: This paper proposes a navigation technique for a monitoring robot that watches over persons in a dynamic environment. To move autonomously in such an environment, the robot requires an environmental map of its surroundings, localization within that map, and a path to the target position. In this study, the robot's position and the environmental map are obtained with Simultaneous Localization and Mapping (SLAM), which builds the map using a laser range finder (LRF) and characteristic markers. Path finding to the target position is performed with a navigation function, a type of Artificial Potential Field (APF) method. The navigation function projects the workspace and the obstacles into a topological space and calculates the APF value. The effectiveness of the presented algorithm was confirmed in simulation, where a point-mass robot reached the target position while avoiding moving objects.
Citations: 2
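
For readers unfamiliar with APF methods, the sketch below shows a plain attractive/repulsive potential-field step for a point-mass robot. Note that this is simpler than the paper's navigation function, which operates on a topological projection of the workspace; the gains and influence radius are assumed values.

```python
# Minimal artificial-potential-field sketch for a point-mass robot (illustrative
# only; not the paper's navigation function). Gains are assumed.
import numpy as np

K_ATT, K_REP, RHO0 = 1.0, 0.5, 1.0   # attraction gain, repulsion gain, influence radius

def apf_step(pos, goal, obstacles, step=0.05):
    """One gradient-descent step over U = U_att + sum U_rep."""
    grad = K_ATT * (pos - goal)                      # attractive gradient
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 1e-6 < d < RHO0:
            # Repulsive gradient; descending it pushes the robot away from the obstacle.
            grad += K_REP * (1.0 / RHO0 - 1.0 / d) * (pos - obs) / d**3
    return pos - step * grad

pos = np.array([0.0, 0.0])
goal = np.array([3.0, 2.0])
obstacles = [np.array([1.5, 1.0])]
for _ in range(200):
    pos = apf_step(pos, goal, obstacles)
```
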
A Bayesian approach for task recognition and future human activity prediction
The 23rd IEEE International Symposium on Robot and Human Interactive Communication. Pub Date: 2014-10-20. DOI: 10.1109/ROMAN.2014.6926339
Vito Magnanimo, Matteo Saveriano, Silvia Rossi, Dongheui Lee
Abstract: Task recognition and future human activity prediction are important for safe and productive human-robot cooperation. In real scenarios, the robot has to extract this information by merging knowledge of the task with contextual information from the sensors, minimizing possible misunderstandings. In this paper, we focus on tasks that can be represented as a sequence of manipulated objects and performed actions. The task is modelled with a Dynamic Bayesian Network (DBN), which takes manipulated objects and performed actions as input. Objects and actions are separately classified starting from raw RGB-D data. The DBN is responsible for estimating the current task, predicting the most probable future action-object pairs, and correcting possible misclassifications. The effectiveness of the proposed approach is validated on a case study consisting of three typical tasks in a kitchen scenario.
Citations: 34
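
The sketch below conveys the Bayesian flavor of the approach: a recursive belief update over tasks from noisy (action, object) observations. The task set, likelihoods, and sticky transition model are illustrative assumptions, not the paper's DBN parameters.

```python
# Hedged sketch of a DBN-style filtering step over tasks given (action, object)
# observations. Tasks, likelihoods, and the transition model are made up for
# illustration and are not taken from the paper.
import numpy as np

TASKS = ["set_table", "make_coffee", "clean_up"]      # hypothetical kitchen tasks

# P(observed (action, object) | task), illustrative values.
LIKELIHOOD = {
    ("grasp", "cup"):   np.array([0.2, 0.7, 0.1]),
    ("grasp", "plate"): np.array([0.7, 0.1, 0.2]),
    ("wipe",  "table"): np.array([0.1, 0.1, 0.8]),
}

def update_belief(belief, action, obj, stay_prob=0.9):
    """One filtering step: predict (tasks tend to persist), then correct."""
    n = len(belief)
    transition = stay_prob * np.eye(n) + (1 - stay_prob) / n * np.ones((n, n))
    predicted = transition.T @ belief
    posterior = predicted * LIKELIHOOD[(action, obj)]
    return posterior / posterior.sum()

belief = np.ones(len(TASKS)) / len(TASKS)
for action, obj in [("grasp", "cup"), ("grasp", "cup"), ("wipe", "table")]:
    belief = update_belief(belief, action, obj)
    print(dict(zip(TASKS, belief.round(2))))
```
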
How do verbal/bodily fillers ease embarrassing situations during silences in conversations?
The 23rd IEEE International Symposium on Robot and Human Interactive Communication. Pub Date: 2014-10-20. DOI: 10.1109/ROMAN.2014.6926226
N. Mukawa, Hiroki Sasaki, A. Kimura
Abstract: In this study we analyzed the roles of verbal/bodily fillers in recovering from awkward silences in conversations. We focused on verbal fillers such as "ummm" and "uh," and bodily fillers like "touching one's own hair or chin" that commonly emerge during silences between turns in conversations. We designed and created simulated dyadic-conversation scenarios using computer-graphics characters, and then performed evaluations utilizing stimuli drawn from these simulations. Subjective evaluation results suggested that fillers express participants' sincerity in maintaining conversations and can be used as cues for other participants to begin their utterances. These findings have practical implications for the behavioral design of conversational robots that can behave more appropriately and politely with humans.
Citations: 7
Improving attitudes towards social robots using imagined contact
The 23rd IEEE International Symposium on Robot and Human Interactive Communication. Pub Date: 2014-10-20. DOI: 10.1109/ROMAN.2014.6926300
Ricarda Wullenkord, F. Eyssel
Abstract: Negative attitudes towards robots are common and pose an obstacle to successful and pleasant human-robot interaction. To reduce negative attitudes in the context of social robotics, we draw upon social psychological methods of attitude change. Imagined contact represents one such paradigm, which to date has mostly been tested and validated in the human-human intergroup context. In the present experiment, we therefore examined the effectiveness of imagined contact (i.e., the mere mental simulation of contact with a human or nonhuman target) in changing robot-related attitudes, robot anxiety, contact intentions, and psychological anthropomorphism. To do so, participants had to briefly imagine a restaurant scenario in as detailed and vivid a manner as possible. Crucially, we manipulated the content of the imagined contact scenario as follows: in the control conditions, participants imagined an interaction with a human target or a technical device; in the experimental condition, in contrast, participants imagined an interaction with a robot target. We predicted that imagined contact with a robot (versus a human target versus a technical device) would result in more positive attitudes towards the robot, stronger contact intentions, and higher psychological anthropomorphism, whereas robot anxiety should decrease. Contrary to our predictions, however, we found that participants who had imagined contact with a human target reported more positive attitudes and higher contact intentions towards a robot prototype than participants who had imagined contact with a technical device. Furthermore, imagined contact with a robot had no effect on the dependent measures. We interpret these findings in light of potential ceiling effects, fluency effects, and the activation of elicited agent knowledge.
Citations: 12