2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN): Latest Publications

An Efficient Algorithm for Visualization and Interpretation of Grounded Language Models
2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900835
Jacob Arkin, Siddharth Patki, J. Rosser, T. Howard
{"title":"An Efficient Algorithm for Visualization and Interpretation of Grounded Language Models","authors":"Jacob Arkin, Siddharth Patki, J. Rosser, T. Howard","doi":"10.1109/RO-MAN53752.2022.9900835","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900835","url":null,"abstract":"Contemporary approaches to grounded language communication accept an utterance and current world representation as input and produce symbols representing the meaning as output. Since modern approaches to language understanding for human-robot interaction use techniques rooted in machine learning, the quality or sensitivity of the solution is often opaque relative to small changes in input. Although it is possible to sample and visualize solutions over a large space of inputs, naïve application of current techniques is often prohibitively expensive for real-time feedback. In this paper we address this problem by reformulating the inference process of Distributed Correspondence Graphs to only recompute subsets of spatially dependent constituent features over a space of sampled environment models. We quantitatively evaluate the speed of inference in physical experiments involving a tabletop robot manipulation scenario. We demonstrate the ability to visualize configurations of the environment where symbol grounding produces consistent solutions in real-time and illustrate how these techniques can be used to identify and repair gaps or inaccuracies in training data.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131929900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
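The core computational idea in the abstract — recomputing only the constituent features that depend on objects that actually moved when a new environment model is sampled — can be pictured with a small caching sketch. This is a hypothetical illustration, not the authors' Distributed Correspondence Graph implementation; the feature function, constituents, and environment sampling below are invented for exposition.

```python
# Hypothetical sketch of reusing constituent features across sampled environments.
# Only features whose spatial inputs changed are recomputed; the rest are cached.
from dataclasses import dataclass


@dataclass(frozen=True)
class Constituent:
    """A phrase constituent paired with the object IDs its features depend on."""
    phrase: str
    depends_on: frozenset  # object IDs whose poses affect this constituent's features


def spatial_feature(phrase, objects):
    # Stand-in for a real spatial feature function over object poses (invented here).
    return sum(x + y for (x, y) in objects.values()) * len(phrase)


def score_environment(constituents, objects, changed_ids, cache):
    """Recompute features only for constituents touching changed objects."""
    total = 0.0
    for c in constituents:
        if c not in cache or c.depends_on & changed_ids:
            cache[c] = spatial_feature(c.phrase, {k: objects[k] for k in c.depends_on})
        total += cache[c]
    return total


constituents = [
    Constituent("the block", frozenset({"block"})),
    Constituent("near the bowl", frozenset({"block", "bowl"})),
]
objects = {"block": (0.2, 0.4), "bowl": (0.6, 0.1)}
cache = {}
print(score_environment(constituents, objects, frozenset(objects), cache))

# Sample a new environment where only the bowl moved: "the block" features are reused.
objects["bowl"] = (0.7, 0.2)
print(score_environment(constituents, objects, frozenset({"bowl"}), cache))
```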
Nothing About Us Without Us: a participatory design for an Inclusive Signing Tiago Robot
2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900538
Emanuele Antonioni, Cristiana Sanalitro, O. Capirci, Alessio Di Renzo, Maria Beatrice D'Aversa, D. Bloisi, Lun Wang, Ermanno Bartoli, Lorenzo Diaco, V. Presutti, D. Nardi
{"title":"Nothing About Us Without Us: a participatory design for an Inclusive Signing Tiago Robot","authors":"Emanuele Antonioni, Cristiana Sanalitro, O. Capirci, Alessio Di Renzo, Maria Beatrice D'Aversa, D. Bloisi, Lun Wang, Ermanno Bartoli, Lorenzo Diaco, V. Presutti, D. Nardi","doi":"10.1109/RO-MAN53752.2022.9900538","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900538","url":null,"abstract":"The success of the interaction between the robotics community and the users of these services is an aspect of considerable importance in the drafting of the development plan of any technology. This aspect becomes even more relevant when dealing with sensitive services and issues such as those related to interaction with specific subgroups of any population. Over the years, there have been few successes in integrating and proposing technologies related to deafness and sign language. Instead, in this paper, we propose an account of successful interaction between a signatory robot and the Italian deaf community, which occurred during the Smart City Robotics Challenge (SciRoc) 2021 competition1. Thanks to the use of a participatory design and the involvement of experts belonging to the deaf community from the early stages of the project, it was possible to create a technology that has achieved significant results in terms of acceptance by the community itself and could lead to significant results in the technology development as well.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114201449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Motivational Gestures in Robot-Assisted Language Learning: A Study of Cognitive Engagement using EEG Brain Activity
2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900508
M. Alimardani, Jishnu Harinandansingh, Lindsey Ravin, M. Haas
{"title":"Motivational Gestures in Robot-Assisted Language Learning: A Study of Cognitive Engagement using EEG Brain Activity","authors":"M. Alimardani, Jishnu Harinandansingh, Lindsey Ravin, M. Haas","doi":"10.1109/RO-MAN53752.2022.9900508","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900508","url":null,"abstract":"Social robots have been shown effective in pedagogical settings due to their embodiment and social behavior that can improve a learner’s motivation and engagement. In this study, the impact of a social robot’s motivational gestures in robot-assisted language learning (RALL) was investigated. Twenty-five university students participated in a language learning task tutored by a NAO robot under two conditions (within-subjects design); in one condition the robot provided positive and negative feedback on participant’s performance using both verbal and non-verbal behavior (Gesture condition), in another condition the robot only employed verbal feedback (No-Gesture condition). To assess cognitive engagement and learning in each condition, we collected EEG brain activity from the participants during the interaction and evaluated their word knowledge during an immediate and delayed post-test. No significant difference was found with respect to cognitive engagement as quantified by the EEG Engagement Index during the practice phase. Similarly, the word test results indicated an overall high performance in both conditions, suggesting similar learning gain regardless of the robot’s gestures. These findings do not provide evidence in favor of robot’s motivational gestures during language learning tasks but at the same time indicate challenges with respect to the design of effective social behavior for pedagogical robots.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114600426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
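The EEG Engagement Index named in the abstract is commonly computed as the ratio of beta band power to the sum of alpha and theta band power, β / (α + θ). The paper does not spell out its exact processing pipeline, so the sketch below is only a generic illustration of that ratio using a Welch power spectrum on a synthetic single-channel signal; the sampling rate, band edges, and signal are assumptions, not the study's setup.

```python
# Generic EEG Engagement Index sketch: beta / (alpha + theta) band-power ratio.
# A synthetic single-channel signal stands in for real EEG; not the paper's pipeline.
import numpy as np
from scipy.signal import welch

fs = 256  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = (np.sin(2 * np.pi * 10 * t)          # alpha-range component
       + 0.5 * np.sin(2 * np.pi * 20 * t)  # beta-range component
       + 0.2 * rng.standard_normal(t.size))

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)


def band_power(freqs, psd, lo, hi):
    """Integrate the power spectral density over a frequency band."""
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])


theta = band_power(freqs, psd, 4, 8)
alpha = band_power(freqs, psd, 8, 13)
beta = band_power(freqs, psd, 13, 30)
engagement_index = beta / (alpha + theta)
print(f"Engagement index: {engagement_index:.3f}")
```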
Robots for Connection: A Co-Design Study with Adolescents
2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900534
Patrícia Alves-Oliveira, Elin A. Björling, Patriya Wiesmann, Heba Dwikat, S. Bhatia, Kai Mihata, M. Cakmak
{"title":"Robots for Connection: A Co-Design Study with Adolescents","authors":"Patrícia Alves-Oliveira, Elin A. Björling, Patriya Wiesmann, Heba Dwikat, S. Bhatia, Kai Mihata, M. Cakmak","doi":"10.1109/RO-MAN53752.2022.9900534","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900534","url":null,"abstract":"Adolescents isolated at home during the COVID19 pandemic lockdown are more likely to feel lonely and in need of social connection. Social robots may provide a much needed social interaction without the risk of contracting an infection. In this paper, we detail our co-design process used to engage adolescents in the design of a social robot prototype intended to broadly support their mental health. Data gathered from our four week design study of nine remote sessions and interviews with 16 adolescents suggested the following design requirements for a home robot: (1) be able to enact a set of roles including a coach, companion, and confidant; (2) amplify human-to-human connection by supporting peer relationships; (3) account for data privacy and device ownership. Design materials are available in open-access, contributing to best practices for the field of Human-Robot Interaction.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"380 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116058662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Leveraging Cognitive States in Human-Robot Teaming
2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900794
Jack Kolb, H. Ravichandar, S. Chernova
{"title":"Leveraging Cognitive States in Human-Robot Teaming","authors":"Jack Kolb, H. Ravichandar, S. Chernova","doi":"10.1109/RO-MAN53752.2022.9900794","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900794","url":null,"abstract":"Mixed human-robot teams (HRTs) have the potential to perform complex tasks by leveraging diverse and complementary capabilities within the team. However, assigning humans to operator roles in HRTs is challenging due to the significant variation in user capabilities. While much of prior work in role assignment treats humans as interchangeable (either generally or within a category), we investigate the utility of personalized models of operator capabilities based in relevant human factors in an effort to improve overall team performance. We call this approach individualized role assignment (IRA) and provide a formal definition. A key challenge for IRA is associated with the fact that factors that affect human performance are not static (e.g., one’s ability to track multiple objects can change during or between tasks). Instead of relying on time-consuming and highly-intrusive measurements taken during the execution of tasks, we propose the use of short cognitive tests, taken before engaging in human-robot tasks, and predictive models of individual performance to perform IRA. Results from a comprehensive user study conclusively demonstrate that IRA leads to significantly better team performance than a baseline method that assumes human operators are interchangeable, even when we control for the influence of the robots’ performance. Further, our results point to the possibility that such relative benefits of IRA will increase as the number of operators (i.e., choices) increase for a fixed number of tasks.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115146761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
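Individualized role assignment, as described in the abstract, predicts each operator's performance on each role from pre-task cognitive test scores and then assigns operators so as to maximize predicted team performance. The sketch below is a hypothetical rendering of that idea with an invented linear performance model and a Hungarian-algorithm assignment; it does not reproduce the paper's actual predictive models or cognitive tests.

```python
# Hypothetical individualized role assignment (IRA) sketch:
# predict per-operator, per-role performance from cognitive test scores,
# then pick the assignment that maximizes total predicted performance.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows: operators, columns: short cognitive test scores (invented features).
test_scores = np.array([
    [0.9, 0.4, 0.7],   # operator 0
    [0.3, 0.8, 0.6],   # operator 1
    [0.5, 0.5, 0.9],   # operator 2
])

# Invented linear weights mapping test scores to predicted performance per role.
role_weights = np.array([
    [1.0, 0.2, 0.1],   # role A leans on the first test
    [0.1, 1.0, 0.3],   # role B leans on the second test
    [0.2, 0.3, 1.0],   # role C leans on the third test
])

predicted_performance = test_scores @ role_weights.T  # operators x roles

# The Hungarian algorithm minimizes cost, so negate to maximize predicted performance.
operators, roles = linear_sum_assignment(-predicted_performance)
for op, role in zip(operators, roles):
    print(f"operator {op} -> role {chr(ord('A') + role)} "
          f"(predicted {predicted_performance[op, role]:.2f})")
```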
Action Unit Generation through Dimensional Emotion Recognition from Text
2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900535
Benedetta Bucci, Alessandra Rossi, Silvia Rossi
{"title":"Action Unit Generation through Dimensional Emotion Recognition from Text","authors":"Benedetta Bucci, Alessandra Rossi, Silvia Rossi","doi":"10.1109/RO-MAN53752.2022.9900535","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900535","url":null,"abstract":"Expressiveness is a critical feature for the communication between humans and robots, and it helps humans to better understand and accept a robot. Emotions can be expressed through a variety of modalities: kinesthetic (via facial expression), body posture and gestures, auditory, thus the acoustic features of speech, and semantic, thus the content of what is said. One of the most effective modalities to communicate emotions is through facial expressions. Social robots often show facial expressions with coded animations. However, the robot must be able to express appropriate emotional responses according to the interaction with people. In this work, we consider verbal interactions between humans and robots and propose a system composed of two modules for the generation of facial emotions by recognising the arousal and valence values of a written sentence. The first module, based on Bidirectional Encoder Representations from Transformers, is deployed for emotion recognition in a sentence. The second, an Auxiliary Classifier Generative Adversarial Network, is proposed for the generation of facial movements for expressing the recognised emotion in terms of valence and arousal.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"379 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122175907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
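The pipeline in the abstract maps a sentence to continuous valence and arousal values with a BERT-based model and then conditions a generative model on those two values to produce facial action units. The toy sketch below illustrates only the conditioning step with a small PyTorch generator over an assumed number of action units; the BERT regressor, the discriminator, and the ACGAN training loop are omitted, and none of the sizes below come from the paper.

```python
# Toy sketch of conditioning an action-unit generator on (valence, arousal).
# Invented layer sizes and AU count; the paper's BERT regressor and ACGAN
# training procedure are not reproduced here.
import torch
import torch.nn as nn

NOISE_DIM = 16
NUM_AUS = 17  # number of facial action-unit intensities to generate (assumed)


class ConditionalAUGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + 2, 64),  # noise plus (valence, arousal)
            nn.ReLU(),
            nn.Linear(64, NUM_AUS),
            nn.Sigmoid(),                  # AU intensities in [0, 1]
        )

    def forward(self, noise, valence_arousal):
        return self.net(torch.cat([noise, valence_arousal], dim=-1))


generator = ConditionalAUGenerator()

# Pretend an upstream text model mapped "I'm so happy to see you!" to these values.
valence_arousal = torch.tensor([[0.8, 0.6]])
noise = torch.randn(1, NOISE_DIM)
action_units = generator(noise, valence_arousal)
print(action_units.detach().numpy().round(2))
```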
Domestic Social Robots as Companions or Assistants? The Effects of the Robot Positioning on the Consumer Purchase Intentions*
2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900844
Jun San Kim, Dahyun Kang, Jongsuk Choi, Sonya S. Kwak
{"title":"Domestic Social Robots as Companions or Assistants? The Effects of the Robot Positioning on the Consumer Purchase Intentions*","authors":"Jun San Kim, Dahyun Kang, Jongsuk Choi, Sonya S. Kwak","doi":"10.1109/RO-MAN53752.2022.9900844","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900844","url":null,"abstract":"This study explores the effects of the positioning strategy of domestic social robots on the purchase intention of consumers. Specifically, the authors investigate the effects of robot positioning as companions with as assistants and as appliances. The study results showed that the participants preferred the domestic social robots positioned as assistants rather than as companions. Moreover, for male participants, the positioning of domestic social robots as appliances was also preferred over robots positioned as companions. The study results also showed that the effects of positioning on the purchase intention were mediated by the participants’ perception of usefulness regarding the robot.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"17 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125273045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The LMA12-O Framework for Emotional Robot Eye Gestures
2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900752
Kerl Galindo, Deborah Szapiro, R. Gomez
{"title":"The LMA12-O Framework for Emotional Robot Eye Gestures","authors":"Kerl Galindo, Deborah Szapiro, R. Gomez","doi":"10.1109/RO-MAN53752.2022.9900752","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900752","url":null,"abstract":"The eyes play a significant role in how robots are perceived socially by humans due to the eye’s centrality in human communication. To date there has been no consistent or reliable system for designing and transferring affective emotional eye gestures to anthropomorphized social robots. Combining research findings from Oculesics, Laban Movement Analysis and the Twelve Principles of Animation, this paper discusses the design and evaluation of the prototype LMA12-O framework for the purpose of maximising the emotive communication potential of eye gestures in anthropomorphized social robots. Results of initial user testings evidenced LMA12-O to be effective in designing affective emotional eye gestures in the test robot with important considerations for future iterations of this framework.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"16 11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125783437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Spatio-Temporal Action Order Representation for Mobile Manipulation Planning*
2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900643
Yosuke Kawasaki, Masaki Takahashi
{"title":"Spatio-Temporal Action Order Representation for Mobile Manipulation Planning*","authors":"Yosuke Kawasaki, Masaki Takahashi","doi":"10.1109/RO-MAN53752.2022.9900643","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900643","url":null,"abstract":"Social robots are used to perform mobile manipulation tasks, such as tidying up and carrying, based on instructions provided by humans. A mobile manipulation planner, which is used to exploit the robot’s functions, requires a better understanding of the feasible actions in real space based on the robot’s subsystem configuration and the object placement in the environment. This study aims to realize a mobile manipulation planner considering the world state, which consists of the robot state (subsystem configuration and their state) required to exploit the robot’s functions. In this paper, this study proposes a novel environmental representation called a world state-dependent action graph (WDAG). The WDAG represents the spatial and temporal order of feasible actions based on the world state by adopting the knowledge representation with scene graphs and a recursive multilayered graph structure. The study also proposes a mobile manipulation planning method using the WDAG. The planner enables the derivation of many effective action sequences to accomplish the given tasks based on an exhaustive understanding of the spatial and temporal connections of actions. The effectiveness of the proposed method is evaluated through practical machine experiments performed. The experimental result demonstrates that the proposed method facilitates the effective utilization of the robot’s functions.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129211046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
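The abstract describes encoding which actions are feasible in which world states and in what order, so that action sequences can be enumerated for planning. A rough way to picture such a representation is a directed graph whose nodes are world-state snapshots and whose edges are feasible actions; action sequences are then paths between states. The toy sketch below uses networkx for an invented pick-carry-place scenario and is not the authors' scene-graph-based WDAG structure.

```python
# Toy spatio-temporal action-order graph: nodes are world-state snapshots,
# edges are feasible actions, and action sequences are paths between states.
# Illustration only, not the paper's WDAG data structure.
import networkx as nx

graph = nx.DiGraph()

# World states are (robot location, gripper contents) tuples (invented).
start = ("shelf", "empty")
holding = ("shelf", "cup")
at_table = ("table", "cup")
goal = ("table", "empty")

graph.add_edge(start, holding, action="pick(cup)")
graph.add_edge(holding, at_table, action="navigate(table)")
graph.add_edge(at_table, goal, action="place(cup)")

# Enumerate feasible action sequences that accomplish the task.
for path in nx.all_simple_paths(graph, start, goal):
    actions = [graph.edges[u, v]["action"] for u, v in zip(path, path[1:])]
    print(" -> ".join(actions))
```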
Listen and tell me who the user is talking to: Automatic detection of the interlocutor’s type during a conversation
2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900632
Youssef Hmamouche, M. Ochs, T. Chaminade, Laurent Prévot
{"title":"Listen and tell me who the user is talking to: Automatic detection of the interlocutor’s type during a conversation","authors":"Youssef Hmamouche, M. Ochs, T. Chaminade, Laurent Prévot","doi":"10.1109/RO-MAN53752.2022.9900632","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900632","url":null,"abstract":"In the well-known Turing test, humans have to judge whether they write to another human or a chatbot. In this article, we propose a reversed Turing test adapted to live conversations: based on the speech of the human, we have developed a model that automatically detects whether she/he speaks to an artificial agent or a human. We propose in this work a prediction methodology combining a step of specific features extraction from behaviour and a specific deep learning model based on recurrent neural networks. The prediction results show that our approach, and more particularly the considered features, improves significantly the predictions compared to the traditional approach in the field of automatic speech recognition systems, which is based on spectral features, such as Mel-frequency Cepstral Coefficients (MFCCs). Our approach allows evaluating automatically the type of conversational agent, human or artificial agent, solely based on the speech of the human interlocutor. Most importantly, this model provides a novel and very promising approach to weigh the importance of the behaviour cues used to make correctly recognize the nature of the interlocutor, in other words, what aspects of the human behaviour adapts to the nature of its interlocutor.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130861997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
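The abstract contrasts the proposed behaviour-based features with the conventional spectral baseline built on Mel-frequency cepstral coefficients. The sketch below shows only that baseline side in a generic form: MFCC extraction with librosa followed by a small recurrent classifier over the frame sequence, run on a synthetic waveform. The feature and layer sizes are assumptions, the model is untrained, and the authors' behaviour-feature model is not shown.

```python
# Generic MFCC + recurrent classifier baseline for "human vs. artificial agent"
# interlocutor detection. Synthetic audio and invented layer sizes; this is not
# the paper's behaviour-feature model.
import numpy as np
import librosa
import torch
import torch.nn as nn

sr = 16000
t = np.linspace(0, 2.0, int(sr * 2.0), endpoint=False)
waveform = 0.1 * np.sin(2 * np.pi * 220 * t).astype(np.float32)  # stand-in for speech

# Frame-level MFCCs: shape (n_mfcc, n_frames) -> (n_frames, n_mfcc) for the RNN.
mfcc = librosa.feature.mfcc(y=waveform, sr=sr, n_mfcc=13).T
features = torch.tensor(mfcc, dtype=torch.float32).unsqueeze(0)  # (1, frames, 13)


class InterlocutorClassifier(nn.Module):
    def __init__(self, n_features=13, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # classes: human vs. artificial agent

    def forward(self, x):
        _, last_hidden = self.rnn(x)
        return self.head(last_hidden[-1])


model = InterlocutorClassifier()
logits = model(features)
print(torch.softmax(logits, dim=-1).detach().numpy().round(3))
```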