2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN): Latest Publications

Inferring affective states from observation of a robot's simple movements
Genta Yoshioka, Takafumi Sakamoto, Yugo Takeuchi
DOI: 10.1109/ROMAN.2015.7333582 | Published: 2015-11-23
Abstract: This paper reports an analytic finding that humans infer the emotional states of a simple, flat robot that moves autonomously on a floor in all directions, interpreted through Russell's circumplex model of affect and dependent on the human's spatial position. We observed the physical interaction between humans and the robot in an experiment in which participants searched for a treasure in a given field while the robot expressed its affective state solely through its simple movements. This result will contribute to the basic design of HRI.
Citations: 6

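The abstract does not give the concrete mapping from movement to affect, so the following is only a minimal sketch, assuming that movement speed drives arousal and that motion toward or away from the observer drives valence on Russell's circumplex model; the thresholds, feature choices, and quadrant labels are illustrative assumptions, not the authors' method.

```python
import math

# Hypothetical mapping from simple robot movement features to an affect label
# on Russell's circumplex model (valence x arousal). Assumed: speed drives
# arousal; motion toward/away from the human drives valence.

QUADRANTS = {
    (True, True): "excited/happy",      # +valence, +arousal
    (True, False): "relaxed/content",   # +valence, -arousal
    (False, True): "afraid/angry",      # -valence, +arousal
    (False, False): "sad/bored",        # -valence, -arousal
}

def infer_affect(robot_xy, robot_velocity_xy, human_xy, speed_threshold=0.2):
    """Return a coarse circumplex quadrant for a flat robot moving on a floor."""
    vx, vy = robot_velocity_xy
    speed = math.hypot(vx, vy)
    # Arousal: fast movement reads as high arousal (assumed threshold in m/s).
    high_arousal = speed > speed_threshold
    # Valence: movement toward the human's position reads as positive valence.
    to_human = (human_xy[0] - robot_xy[0], human_xy[1] - robot_xy[1])
    dot = vx * to_human[0] + vy * to_human[1]
    positive_valence = dot > 0
    return QUADRANTS[(positive_valence, high_arousal)]

print(infer_affect((0.0, 0.0), (0.5, 0.0), (2.0, 0.0)))  # -> "excited/happy"
```
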
A case study of an automatic volume control interface for a telepresence system
Masaaki Takahashi, Masa Ogata, M. Imai, Keisuke Nakamura, K. Nakadai
DOI: 10.1109/ROMAN.2015.7333605 | Published: 2015-11-23
Abstract: The study of the telepresence robot as a tool for telecommunication from a remote location is attracting considerable attention. However, a telepresence robot system typically does not allow the volume of the user's utterance to be adjusted precisely, because it does not account for varying conditions in the sound environment, such as noise. In addition, when talking with several people in a remote location, the user would like to be able to change the speaker volume freely according to the situation. A previous study proposed a telepresence robot with a function that automatically regulates the volume of the user's utterance, but how users exploit this function in practical situations still needs to be investigated. We propose a telepresence conversation robot system called "TeleCoBot." TeleCoBot includes an operator's user interface through which the volume of the user's utterance is automatically regulated according to the distance between the robot and the conversation partner and the noise level in the robot's environment. We conducted a case study in which participants played a game using TeleCoBot's interface. The results reveal how the participants used TeleCoBot and which additional factors the system requires.
Citations: 6

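As a rough illustration of the kind of regulation the abstract describes, the sketch below computes a speaker level from the robot-to-partner distance and the ambient noise level; the spreading-loss compensation, the 10 dB noise margin, and all limits are assumptions, since the paper's actual control law is not stated in the abstract.

```python
import math

# Sketch of distance- and noise-dependent volume regulation: speaker output
# grows with distance to the conversation partner and is kept above the
# measured noise floor. All parameter values are assumptions.

def output_volume_db(distance_m, noise_db, base_db=60.0, min_db=50.0, max_db=80.0):
    """Return a speaker output level (dB SPL) for the remote user's utterance."""
    # Compensate spherical spreading loss relative to a 1 m reference:
    # each doubling of distance costs about 6 dB.
    distance_gain = 20.0 * math.log10(max(distance_m, 0.1))
    # Keep speech an assumed 10 dB above the measured noise floor.
    noise_floor_target = noise_db + 10.0
    level = max(base_db + distance_gain, noise_floor_target)
    return min(max(level, min_db), max_db)  # clip to the speaker's safe range

print(round(output_volume_db(distance_m=2.0, noise_db=55.0), 1))  # -> 66.0
```
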
Investigating the effects of robot behavior and attitude towards technology on social human-robot interactions
V. Nitsch, Thomas Glassen
DOI: 10.1109/ROMAN.2015.7333560 | Published: 2015-11-23
Abstract: Many envision a future in which personal service robots share our homes and take part in our daily lives. These robots should possess a certain "social intelligence," so that people are willing, if not eager, to interact with them. To this end, applied psychologists and roboticists have conducted numerous studies to identify the factors that affect social interactions between humans and robots, both positively and negatively. To ascertain the extent to which social human-robot interaction might be influenced by robot behavior and a person's attitude towards technology, an experiment was conducted using the UG paradigm, in which participants (N=48) interacted with a robot that displayed either animated or apathetic behavior. The results suggest that although interaction with a robot displaying animated behavior is rated more favorably overall, people may nevertheless act differently towards such robots, depending on their perceived technological competence and their enthusiasm for technology.
Citations: 20

Sequential intention estimation of a mobility aid user for intelligent navigational assistance
Takamitsu Matsubara, J. V. Miró, Daisuke Tanaka, James Poon, Kenji Sugimoto
DOI: 10.1109/ROMAN.2015.7333580 | Published: 2015-11-23
Abstract: This paper proposes an intelligent mobility aid framework aimed at mitigating the impact of cognitive and/or physical user deficiencies by providing suitable mobility assistance with minimum interference. To this end, a user action model based on Gaussian Process Regression (GPR) is proposed to encapsulate the probabilistic and nonlinear relationships among user action, the state of the environment, and user intention. Moreover, exploiting the analytical tractability of the predictive distribution allows a sequential Bayesian process for user intention estimation. The proposed scheme is validated on data obtained in an indoor setting with an instrumented robotic wheelchair augmented with sensory feedback from the environment and user commands as well as proprioceptive information from the vehicle itself, achieving near-real-time accuracy of ~80%. The initial results are promising and indicate the suitability of the process for inferring user driving behaviors in the context of ambulatory robots designed to assist users with mobility impairments during regular daily activities.
Citations: 14

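A minimal sketch of the general technique named in the abstract (per-intention GPR action models whose analytic Gaussian predictive distributions feed a recursive Bayes update over intentions) is shown below; it uses scikit-learn's GaussianProcessRegressor and synthetic one-dimensional data as stand-ins, so the intentions, features, and kernels are placeholders rather than the paper's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Sequential Bayesian intention estimation with one GP action model per
# discrete intention. Training data and intentions are synthetic placeholders.

INTENTIONS = ["go_left", "go_right"]
rng = np.random.default_rng(0)

# Train one GP "user action model" per intention: joystick command (1-D)
# as a function of environment state (1-D).
models = {}
for intent in INTENTIONS:
    X = rng.uniform(-1, 1, size=(30, 1))                  # environment state
    sign = -1.0 if intent == "go_left" else 1.0
    y = sign * X[:, 0] + 0.1 * rng.standard_normal(30)    # observed commands
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    models[intent] = gp.fit(X, y)

def update_belief(belief, state, action):
    """One recursive Bayes step: belief over intentions given (state, action)."""
    likelihoods = []
    for intent in INTENTIONS:
        mu, sigma = models[intent].predict(np.atleast_2d(state), return_std=True)
        # The GP predictive distribution is Gaussian, so the likelihood is analytic.
        lik = np.exp(-0.5 * ((action - mu[0]) / sigma[0]) ** 2) / (sigma[0] * np.sqrt(2 * np.pi))
        likelihoods.append(lik)
    posterior = belief * np.array(likelihoods)
    return posterior / posterior.sum()

belief = np.full(len(INTENTIONS), 1.0 / len(INTENTIONS))
for state, action in [(0.5, -0.45), (0.6, -0.62)]:        # observed user commands
    belief = update_belief(belief, state, action)
print(dict(zip(INTENTIONS, np.round(belief, 3))))           # -> mostly "go_left"
```
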
Talking-Ally: What is the future of robot's utterance generation?
Hitomi Matsushita, Yohei Kurata, P. R. D. De Silva, M. Okada
DOI: 10.1109/ROMAN.2015.7333603 | Published: 2015-11-23
Abstract: It remains an enormous challenge within the HRI community to make a significant contribution to the development of a robot's utterance generation mechanism. How does one actually go about contributing to and predicting the future of robot utterance generation? This motivates our proposal of an utterance generation approach that utilizes both addressivity and hearership. The novel Talking-Ally platform is capable of producing an utterance (toward addressivity) by utilizing the state of the hearer's behaviors (eye-gaze information) to persuade the user (state of hearership) through dynamic interaction. Moreover, the robot can manipulate modality, turn-initial, and entrusting behaviors to increase the liveliness of conversations, facilitated by shifting the direction of the conversation and maintaining the hearer's engagement in it. Our experiment focuses on evaluating how interactive users engage with the utterance generation approach (performance) and the persuasive power of the robot's communication within dynamic interactions.
Citations: 3

Robot watchfulness hinders learning performance
Jonathan S. Herberg, S. Feller, Ilker Yengin, Martin Saerbeck
DOI: 10.1109/ROMAN.2015.7333620 | Published: 2015-11-23
Abstract: Educational technology applications, such as computerized learning environments and robot tutors, are often programmed to provide social cues in order to facilitate natural interaction and enhance productive outcomes. However, social interactions can carry costs that run counter to these goals. Here, we present an experiment testing the impact of a watchful versus a non-watchful robot tutor on children's language-learning effort and performance. Across two interaction sessions, children learned French and Latin rules from a robot tutor and filled in worksheets applying the rules to translate phrases. Results indicate better performance on the worksheets in the session in which the robot looked away from the child, compared to the session in which it looked toward the child, while the child was filling in the worksheets. This was particularly the case for the more difficult worksheet items. These findings highlight the need for careful implementation of social robot behaviors to avoid counterproductive effects.
Citations: 23

Effects of interaction and appearance on subjective impression of robots
Keisuke Nonomura, K. Terada, A. Ito, S. Yamada
DOI: 10.1109/ROMAN.2015.7333577 | Published: 2015-11-23
Abstract: Human-interactive robots are assessed according to various factors, such as behavior, appearance, and quality of interaction. In the present study, we investigated the hypothesis that impressions of an unattractive robot are improved by emotional interaction involving physical touch with the robot. An experiment with human subjects confirmed that evaluations of the intimacy factor of unattractive robots improved after two minutes of physical and emotional interaction with such robots.
Citations: 0

Conscious/unconscious emotional dialogues in typical children in the presence of an InterActor Robot
I. Giannopulu, Tomio Watanabe
DOI: 10.1109/ROMAN.2015.7333575 | Published: 2015-11-23
Abstract: In the present interdisciplinary study, we combined knowledge from cognitive neuroscience, psychiatry, and engineering with the aim of analyzing emotion, language, and un/consciousness in children aged 6 (n=20) and 9 (n=20) years via listener-speaker communication. The speaker was always a child; the listener was a Human InterActor or a Robot InterActor, i.e., a small robot that reacts to speech expression by nodding only. Unconscious nonverbal emotional expression associated with physiological data (heart rate), as well as conscious processes related to behavioral data (number of nouns and verbs, in addition to reported feelings), were considered. The results showed that 1) heart rate was higher for the 6-year-old children than for the 9-year-old children when the InterActor was the robot; and 2) the number of words (nouns and verbs) expressed by both age groups was higher when the InterActor was a human, and it was lower for the 6-year-old children than for the 9-year-old children. Even though a difference in consciousness exists between the two groups, everything happens as if the InterActor Robot allows children to elaborate a multivariate equation, encoding and conceptualizing it within their brain and externalizing it as unconscious nonverbal emotional behavior, i.e., automatic activity. The Human InterActor, by contrast, allows children to externalize the elaborated equation as conscious verbal behavior (words), i.e., controlled activity. Unconscious and conscious processes would thus depend not only on natural environments but also on artificial environments such as robots.
Citations: 6

Constraints on freely chosen action for moral robots: Consciousness and control
P. Bello, John Licato, S. Bringsjord
DOI: 10.1109/ROMAN.2015.7333654 | Published: 2015-11-23
Abstract: The protean word "autonomous" has gained broad currency as a descriptive adjective for AI research projects, robotic and otherwise. Depending upon context, "autonomous" at present connotes anything from a shallow, purely reactive system to a sophisticated cognitive architecture reflective of much of human cognition; hence the term fails to pick out any specific set of constitutive functionality. However, philosophers and ethicists have something relatively well-defined in mind when they talk about the idea of autonomy. For them, an autonomous agent is often by definition potentially morally responsible for its actions. Moreover, as a prerequisite to correct ascription of "autonomous," a certain capacity to choose freely is assumed, even if this freedom is understood to be semi-constrained by societal conventions, moral norms, and the like.
Citations: 6

A novel 4 DOF eye-camera positioning system for Androids
Edgar Flores, S. Fels
DOI: 10.1109/ROMAN.2015.7333608 | Published: 2015-11-23
Abstract: We present a novel eye-camera positioning system with four degrees of freedom (DOF). The system has been designed to emulate human eye movements, including saccades, for anatomically accurate androids. The architecture is similar to that of a universal joint, in that a hollowed sphere (the eyeball), hosting a miniature CMOS color camera, takes the part of the cross shaft connecting a pair of hinges oriented at 90 degrees to each other. This concept allows the motors to remain static, enabling them to be placed in multiple configurations during the mechanical design stage and facilitating the inclusion of other robotic parts into the robot's head. Based on our evaluations, the robotic eye-camera has been shown to be suitable for perception experiments that require human-like eye motion.
Citations: 3

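The abstract describes the mechanism rather than its control software, so the following gaze-kinematics sketch is purely illustrative: it assumes the four DOF are allocated as independent pan and tilt hinges for two eyes and converts a 3-D fixation target into hinge angles; none of this geometry comes from the paper.

```python
import math

# Illustrative gaze kinematics for a pan/tilt eye mechanism. Assumed DOF
# allocation: two eyes, each with an independent pan (yaw) and tilt (pitch).

def eye_angles(eye_center, target):
    """Return (pan, tilt) in degrees that point the eye's optical axis at target."""
    dx = target[0] - eye_center[0]   # rightward
    dy = target[1] - eye_center[1]   # upward
    dz = target[2] - eye_center[2]   # forward
    pan = math.degrees(math.atan2(dx, dz))
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))
    return pan, tilt

# Both eyes fixate the same near target, producing a slight vergence difference.
left = eye_angles((-0.03, 0.0, 0.0), (0.1, 0.05, 0.5))
right = eye_angles((0.03, 0.0, 0.0), (0.1, 0.05, 0.5))
print(left, right)
```
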