ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication: Latest Publications

Haptic rendering of sharp objects using lateral forces
Authors: O. Portillo-Rodríguez, C. Avizzano, M. Bergamasco, Gabriel Robles-De-La-Torre
DOI: https://doi.org/10.1109/ROMAN.2006.314366
Abstract: Achieving realistic rendering of thin and spatially sharp objects (needles, for example) is an important open problem in computer haptics. Intrinsic mechanical properties of users, such as limb inertia, as well as mechanical and bandwidth limitations in haptic interfaces, make this a very challenging problem. A successful rendering algorithm should also provide stable contact with a haptic virtual object. Here, perceptual illusions have been used to overcome some of these limitations and render objects with perceived sharp features. The feasibility of the approach was tested using a haptics-to-vision matching task. Results suggest that lateral-force-based illusory shapes can be used to render sharp objects, while also providing stable contact during virtual object exploration.
Citations: 4
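The lateral-force shape illusion the abstract builds on delivers a tangential force proportional to the slope of a virtual surface rather than pushing the probe outward. Below is a minimal Python sketch of that idea, assuming a hypothetical one-dimensional triangular ridge profile and an illustrative gain; it is not the authors' published rendering algorithm.

```python
import numpy as np

# Hypothetical "sharp ridge" profile: a narrow triangular bump centred at x = 0.
HALF_WIDTH = 0.002   # m
HEIGHT = 0.001       # m
GAIN = 2.0           # N per unit slope (illustrative value)

def dh_dx(x):
    """Slope of the virtual height profile at probe position x."""
    if abs(x) >= HALF_WIDTH:
        return 0.0
    return -np.sign(x) * HEIGHT / HALF_WIDTH

def lateral_force(x):
    """Tangential force proportional to the local slope (shape illusion)."""
    return -GAIN * dh_dx(x)

# Sample the force field across the ridge, as a haptic loop would.
for x_probe in np.linspace(-0.004, 0.004, 9):
    print(f"x = {x_probe:+.4f} m  ->  lateral force = {lateral_force(x_probe):+.4f} N")
```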
Investigating the relationship between the personality of a robotic TV assistant and the level of user control
Authors: B. Meerbeek, J. Hoonhout, P. Bingley, J. Terken
DOI: https://doi.org/10.1109/ROMAN.2006.314362
Abstract: This paper describes the design and evaluation of a robotic TV assistant that helps users find a TV-programme that fits their interests. Questions that were addressed include: What personality do users prefer for the robotic TV-assistant? What level of control do they prefer? How do personality and the level of control relate to each other? Four prototypes were developed by combining two personalities and two levels of user control. In the high control condition, a speech-based command-and-control interaction style was used, whereas the interaction style in the low control condition consisted of speech-based system-initiative natural language dialogue. The results demonstrated an interaction between the effects of personality and level of control on user preferences. Overall, the most preferred combination was an extravert and friendly personality with low user control. Additionally, it was found that the perceived level of control was influenced by the robot's personality. This suggests that the robot's personality can be used as a means to increase the amount of control that users perceive.
Citations: 18
Robotics in Education: Plastic Bottle Based Robots for Understanding Morph-Functionality
Authors: Kojiro Matsushita, H. Yokoi, T. Arai
DOI: https://doi.org/10.1109/ROMAN.2006.314476
Abstract: In this paper, we introduce our robot package for educational use. Its main characteristics are the following: robots are built by gluing together plastic bottles and RC servo motors, so that technical skills such as machining are not required of students, and three types of robot controllers (a manual controller, an autonomous controller, and a bio-signal interface controller) are provided, so that students can experience autonomous robots and bio-signal interface techniques. The package thus provides opportunities to design both robot structure and control architecture and, moreover, to experience new engineering technologies. So far, we have conducted robot education courses for undergraduates and graduates three times. The first course aimed to teach students morpho-functionality, a concept from embodied artificial intelligence. As a result, all the students designed locomotive robots and understood morpho-functionality. In the second and third courses, students controlled locomotive robots with bio-signal interface techniques. We have thus shown that this educational package covers a variety of robot techniques, and that the course programme can be adapted to the available course hours and the target students.
Citations: 6
Evaluation of Mapping with a Tele-operated Robot with Video Feedback
Authors: C. Lundberg, H. Christensen
DOI: https://doi.org/10.1109/ROMAN.2006.314412
Abstract: This research has examined robot operators' abilities to gain situational awareness while performing tele-operation with video feedback. The research included a user study in which 20 test persons explored and drew a map of a corridor and several rooms, which they had not visited before. Half of the participants did the exploration and mapping using a teleoperated robot (iRobot PackBot) with video feedback, without being able to see or enter the exploration area themselves. The other half fulfilled the task manually by walking through the premises. The two groups were evaluated regarding time consumption, and the rendered maps were evaluated concerning error rate and dimensional and logical accuracy. Dimensional accuracy describes the test person's ability to estimate and reproduce dimensions in the map. Logical accuracy refers to missed, added, misinterpreted, reversed and inconsistent objects or shapes in the depiction. The evaluation showed that fulfilling the task with the robot on average took 96% longer and produced 44% more errors than doing it without the robot. Robot users overestimated dimensions by an average of 16%, while non-robot users made an average overestimation of 1%. Further, the robot users had a 69% larger standard deviation in their dimensional estimations and on average made 23% more logical errors during the test.
Citations: 9
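The group comparison rests on simple aggregate statistics: extra completion time, logical error counts, and over- or underestimation of dimensions. As a reading aid, here is a minimal Python sketch of how such figures could be computed from per-participant records; the numbers are placeholders, not the study's data.

```python
import statistics

# Placeholder per-participant records: (completion_time_s, logical_errors,
# dimensional_estimate_ratio).  A ratio of 1.16 means a 16% overestimation.
robot_group = [(520, 5, 1.20), (610, 4, 1.10), (480, 6, 1.18)]
walk_group  = [(260, 3, 1.02), (300, 2, 0.99), (250, 4, 1.01)]

def summarize(group):
    times  = [t for t, _, _ in group]
    errors = [e for _, e, _ in group]
    ratios = [r for _, _, r in group]
    return {
        "mean_time_s": statistics.mean(times),
        "mean_errors": statistics.mean(errors),
        "mean_overestimation_pct": (statistics.mean(ratios) - 1.0) * 100,
        "stdev_estimation_pct": statistics.stdev(ratios) * 100,
    }

robot, walk = summarize(robot_group), summarize(walk_group)
extra_time_pct = (robot["mean_time_s"] / walk["mean_time_s"] - 1.0) * 100
print(f"Robot group took {extra_time_pct:.0f}% longer on average")
print(f"Robot overestimation: {robot['mean_overestimation_pct']:.0f}% "
      f"vs walking: {walk['mean_overestimation_pct']:.0f}%")
```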
What's in the gap? Interaction transitions that make HRI work
Authors: H. Hüttenrauch, K. S. Eklundh, A. Green, E. A. Topp, H. Christensen
DOI: https://doi.org/10.1109/ROMAN.2006.314405
Abstract: This paper presents an in-depth analysis from a human-robot interaction (HRI) study on spatial positioning and interaction episode transitions. Subjects showed a living room to a robot to teach it new places and objects. This joint task was analyzed with respect to organizing strategies for interaction episodes. Transitions between interaction episodes turned out to be important: small adaptive movements in posture were observed around them. This finding needs to be incorporated into HRI modules that plan and execute robots' spatial behavior in interaction, e.g., through dynamic adaptation of spatial formations and distances depending on the interaction episode.
Citations: 17
Action, State and Effect Metrics for Robot Imitation
Authors: A. Alissandrakis, Chrystopher L. Nehaniv, K. Dautenhahn
DOI: https://doi.org/10.1109/ROMAN.2006.314423
Abstract: This paper addresses the problem of body mapping in robotic imitation where the demonstrator and imitator may not share the same embodiment (degrees of freedom (DOFs), body morphology, constraints, affordances and so on). Body mappings are formalized using a unified (linear) approach via correspondence matrices, which allow one to capture partial, mirror symmetric, one-to-one, one-to-many, many-to-one and many-to-many associations between various DOFs across dissimilar embodiments. We show how metrics for matching state and action aspects of behaviour can be mathematically determined by such correspondence mappings, which may serve to guide a robotic imitator. The approach is illustrated in a number of examples, using agents described by simple kinematic models and different types of correspondence mappings. Also, focusing on aspects of displacement and orientation of manipulated objects, a selection of metrics are presented, towards a characterization of the space of effect metrics.
Citations: 39
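The central construct is the correspondence matrix, a linear map that relates demonstrator DOFs to imitator DOFs even when the two bodies differ. A minimal Python sketch follows, assuming joint-angle state vectors, an illustrative many-to-one matrix, and a simple Euclidean state metric; the specific matrix and metric are assumptions, not taken from the paper.

```python
import numpy as np

# Demonstrator has 4 DOFs, imitator only 2: a many-to-one correspondence.
# Each row of C says how an imitator joint draws on the demonstrator joints.
C = np.array([
    [0.5, 0.5, 0.0, 0.0],   # imitator joint 0 averages demonstrator joints 0 and 1
    [0.0, 0.0, 1.0, 0.0],   # imitator joint 1 copies demonstrator joint 2
])

def map_state(q_demo: np.ndarray) -> np.ndarray:
    """Map a demonstrator joint-angle vector through the correspondence matrix."""
    return C @ q_demo

def state_metric(q_imitator: np.ndarray, q_demo: np.ndarray) -> float:
    """Illustrative state metric: distance between the imitator's pose and the
    corresponded demonstrator pose (smaller = better matching)."""
    return float(np.linalg.norm(q_imitator - map_state(q_demo)))

q_demo = np.radians([30, 50, -20, 10])
q_imit = np.radians([35, -15])
print("target imitator pose (deg):", np.degrees(map_state(q_demo)))
print("state mismatch:", state_metric(q_imit, q_demo))
```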
Improvement of Emotion Recognition from Voice by Separating of Obstruents
Authors: E. Kim, K. Hyun, Y. Kwak
DOI: https://doi.org/10.1109/ROMAN.2006.314449
Abstract: Previous researchers in the area of emotion recognition have classified emotion from the whole voice. They did not consider that emotion features vary according to the phoneme. Hence, in the present work, we study the characteristics of phonemes in emotion features. Based on the results, we define the obstruents effect, which is a negative effect resulting from increased feature values. We then recognize emotion from the voice by separating obstruents rather than from the whole voice. By separating obstruents, we could improve the emotion recognition rate by about 4.3%.
Citations: 11
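The core step is to compute emotion features only on the non-obstruent portions of an utterance, so that bursts and frication do not inflate the feature values. The Python sketch below illustrates that separation with a crude zero-crossing-rate and energy heuristic; the detector, thresholds, and features are assumptions, not the authors' method.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=200):
    """Split a mono signal into overlapping frames (25 ms / 12.5 ms at 16 kHz)."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop: i * hop + frame_len] for i in range(n)])

def is_obstruent(frame, zcr_thresh=0.25, energy_thresh=1e-3):
    """Crude heuristic: obstruents tend to show a high zero-crossing rate
    (frication/bursts) or very low energy (closures).  Thresholds are
    illustrative assumptions."""
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    energy = np.mean(frame ** 2)
    return zcr > zcr_thresh or energy < energy_thresh

def emotion_features(signal):
    """Compute simple energy-based features on non-obstruent frames only."""
    frames = frame_signal(signal)
    sonorant = np.array([f for f in frames if not is_obstruent(f)])
    if len(sonorant) == 0:
        return None
    energies = np.mean(sonorant ** 2, axis=1)
    return {"mean_energy": float(np.mean(energies)),
            "energy_std": float(np.std(energies))}

# Toy input: 1 s of low-frequency "voiced" signal with an added noisy stretch.
sr = 16000
t = np.arange(sr) / sr
voiced = 0.1 * np.sin(2 * np.pi * 150 * t)
voiced[4000:5000] += 0.05 * np.random.randn(1000)
print(emotion_features(voiced))
```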
Linking Speech and Gesture in Multimodal Instruction Systems
Authors: J. Wolf, G. Bugmann
DOI: https://doi.org/10.1109/ROMAN.2006.314408
Abstract: This paper analyses the timing of gesture and speech acts in a corpus (MIBL) of free-flowing human-to-human instruction dialogues. From there, an algorithm is proposed to pair the instructor's speech with the instructor's gestures. It is shown that correct pairing requires timing and semantic information. Further work will explore the use of this algorithm in unconstrained, free-flowing multimodal instruction dialogues between human and robot. A brief overview is given of a robotic system that is able to learn a card game from a human teacher.
Citations: 16
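The pairing algorithm is reported to need both timing and semantic information. As an illustration of how those two cues can be combined, the following Python sketch pairs each gesture with the compatible speech act that overlaps it most in time, falling back to the nearest compatible act; the semantic compatibility table and data structures are hypothetical, not the MIBL algorithm.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SpeechAct:
    start: float          # seconds
    end: float
    kind: str             # e.g. "deictic_ref", "action_desc"

@dataclass
class Gesture:
    start: float
    end: float
    kind: str             # e.g. "point", "move"

# Illustrative semantic compatibility between gesture and speech-act types.
COMPATIBLE = {("point", "deictic_ref"), ("move", "action_desc")}

def overlap(a_start, a_end, b_start, b_end):
    """Length of temporal overlap between two intervals (0 if disjoint)."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def pair_gesture(g: Gesture, acts: List[SpeechAct]) -> Optional[SpeechAct]:
    """Pick the compatible speech act with the largest temporal overlap;
    fall back to the nearest compatible act if nothing overlaps."""
    compatible = [a for a in acts if (g.kind, a.kind) in COMPATIBLE]
    if not compatible:
        return None
    best = max(compatible, key=lambda a: overlap(g.start, g.end, a.start, a.end))
    if overlap(g.start, g.end, best.start, best.end) > 0:
        return best
    return min(compatible,
               key=lambda a: abs((a.start + a.end) / 2 - (g.start + g.end) / 2))

acts = [SpeechAct(0.0, 1.2, "action_desc"), SpeechAct(1.3, 2.0, "deictic_ref")]
gesture = Gesture(1.4, 1.9, "point")
print(pair_gesture(gesture, acts))   # -> SpeechAct(start=1.3, end=2.0, kind='deictic_ref')
```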
A General-purpose Transportation Robot: An Outline of Work in Progress
Authors: M. Wahde, Jimmy Pettersson
DOI: https://doi.org/10.1109/ROMAN.2006.314486
Abstract: An outline of a current joint project between Chalmers University of Technology (in Sweden) and several Japanese universities (Waseda University, Future University, and the University of Tsukuba) is presented. The aim of the project is to build a general-purpose transportation robot for use in hospitals, industries, and similar facilities. The project will provide a thorough test of the recently developed utility function method for behavior selection, which will be used for generating the decision-making system in the transportation robot. In this paper, an outline of the proposed transportation robot is given, along with a brief description of some of the challenges arising from this project. Furthermore, the utility function method is presented. Finally, the results obtained thus far are briefly discussed, and some directions for further work are provided.
Citations: 1
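A behavior-selection scheme of this kind ranks candidate behaviors by state-dependent utilities and activates the highest-scoring one. The Python sketch below illustrates that selection idea with made-up behaviors and utility functions for a transportation scenario; it is not the utility function method as published.

```python
from typing import Callable, Dict

# Robot state for a transportation task (illustrative fields only).
state = {"battery": 0.35, "carrying_load": True, "obstacle_ahead": False}

# Hypothetical utility functions: map the current state to a scalar utility.
utilities: Dict[str, Callable[[dict], float]] = {
    "deliver_load":   lambda s: 1.0 if s["carrying_load"] else 0.0,
    "recharge":       lambda s: 1.0 - s["battery"],        # more urgent when low
    "avoid_obstacle": lambda s: 2.0 if s["obstacle_ahead"] else 0.0,
    "idle":           lambda s: 0.1,
}

def select_behavior(s: dict) -> str:
    """Activate the behavior whose utility is currently highest."""
    return max(utilities, key=lambda name: utilities[name](s))

print(select_behavior(state))        # -> "deliver_load"
state["obstacle_ahead"] = True
print(select_behavior(state))        # -> "avoid_obstacle"
```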
Human-Robot Interaction Using Affective Cues
Authors: Changchun Liu, Pramila Rani, N. Sarkar
DOI: https://doi.org/10.1109/ROMAN.2006.314431
Abstract: This paper presents a closed-loop human-robot interaction framework in which a robot can infer the implicit affective cues of the human and respond to them appropriately. Affective cues are inferred by the robot in real time using psychophysiological analysis, where the physiological signals are measured through wearable biofeedback sensors. A robot-based basketball game is designed in which a robotic "coach" monitors each participant's anxiety and alters the difficulty level of the game in a real-time closed-loop manner according to the participant's performance and anxiety. The results are compared with situations in which anxiety is not monitored and the game is adapted according to performance alone. Results show that monitoring and responding to affective cues led to greater performance improvement for the majority of participants, with lower anxiety.
Citations: 43
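The closed loop described here adapts game difficulty from both performance and an anxiety estimate inferred from wearable physiological sensors. A minimal Python sketch of such an adaptation rule follows, with a toy anxiety score and made-up thresholds standing in for the psychophysiological inference; it is not the authors' controller.

```python
def infer_anxiety(heart_rate: float, skin_conductance: float) -> float:
    """Toy stand-in for psychophysiological inference: map two wearable-sensor
    readings to a 0..1 anxiety score.  Real systems learn this mapping."""
    hr_norm = min(max((heart_rate - 60.0) / 60.0, 0.0), 1.0)    # 60-120 bpm range
    sc_norm = min(max(skin_conductance / 20.0, 0.0), 1.0)       # 0-20 microsiemens
    return 0.5 * hr_norm + 0.5 * sc_norm

def adapt_difficulty(level: int, performance: float, anxiety: float) -> int:
    """Raise difficulty when the player is doing well and calm; lower it when
    anxiety is high regardless of score.  Thresholds are assumptions."""
    if anxiety > 0.7:
        return max(1, level - 1)
    if performance > 0.8 and anxiety < 0.4:
        return min(10, level + 1)
    return level

level = 5
for hr, sc, perf in [(70, 4.0, 0.9), (115, 18.0, 0.9), (80, 6.0, 0.5)]:
    anxiety = infer_anxiety(hr, sc)
    level = adapt_difficulty(level, perf, anxiety)
    print(f"anxiety={anxiety:.2f}  ->  difficulty level {level}")
```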