{"title":"Natural user interface for Roombots","authors":"Ayberk Ozgur, Stéphane Bonardi, Massimo Vespignani, R. Moeckel, A. Ijspeert","doi":"10.1109/ROMAN.2014.6926223","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926223","url":null,"abstract":"Roombots (RB) are self-reconfigurable modular robots designed to study robotic reconfiguration on a structured grid and adaptive locomotion off grid. One of the main goals of this platform is to create adaptive furniture inside living spaces such as homes or offices. To ease the control of RB modules in these environments, we propose a novel and more natural way of interaction with the RB modules on a RB grid, called the Natural Roombots User Interface. In our method, the user commands the RB modules using pointing gestures. The user's body is tracked using multiple Kinects. The user is also given real-time visual feedback of their physical actions and the state of the system via LED illumination electronics installed on both RB modules and the grid. We demonstrate how our interface can be used to efficiently control RB modules on simple point-to-point grid locomotion and conclude by discussing future extensions.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131236595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting occluded people for robotic guidance","authors":"E. Martinson","doi":"10.1109/ROMAN.2014.6926342","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926342","url":null,"abstract":"Often overlooked in human-robot interaction is the challenge of people detection. For natural interaction, a robot must detect people without waiting for them to face the camera, get far enough away to be fully present, or center themselves fully within the field of view. Furthermore, it must happen without requiring immense amounts of processing that are not practical for real systems. In this work we focus on person detection in a guidance scenario, where occlusion is particularly prevalent. Using a layered approach with depth images, we can substantially improve detection rates under high levels of occlusion, and enable a robot to detect a target that is moving into and out of the field of view.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130848571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design, analysis and simulation of biped running robot","authors":"Jongwon Park, Young Kook Kim, Byungho Yoon, Kyung-Soo Kim, Soohyun Kim","doi":"10.1109/ROMAN.2014.6926233","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926233","url":null,"abstract":"Legged systems are potentially expected to have outstanding mobility. However, they have shown only modest levels of capability to traverse on flat ground. This paper introduces a biped robot inspired by the hind limbs of cats. We present the characteristics of the system and conduct a position analysis based on vector loop equations. Afterwards, the ground clearance and parallel movement of the robotic leg are shown. We present a speed equation in an effort to verify how the major parameters affect the speed. In addition, we explore control strategies for ground speed matching, acceleration, slip, and speed control. A dynamic simulation shows that the biped robot reached 13.31 leg lengths per second (9.3km/h). This biped robot with the speed equation and its control strategies allow us to understand legged locomotion and can show us how to improve the speed.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115929278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"People help robots who help others, not robots who help themselves","authors":"Bradley Hayes, D. Ullman, Emma Alexander, Caroline Bank, B. Scassellati","doi":"10.1109/ROMAN.2014.6926262","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926262","url":null,"abstract":"Robots that engage in social behaviors benefit greatly from possessing tools that allow them to manipulate the course of an interaction. Using a non-anthropomorphic social robot and a simple counting game, we examine the effects that empathy-generating robot dialogue has on participant performance across three conditions. In the self-directed condition, the robot petitions the participant to reduce his or her performance so that the robot can avoid punishment. In the externally-directed condition, the robot petitions on behalf of its programmer so that its programmer can avoid punishment. The control condition does not involve any petitions for empathy. We find that externally-directed petitions from the robot show a higher likelihood of motivating the participant to sacrifice his or her own performance to help, at the expense of incurring negative social effects. We also find that experiencing these emotional dialogue events can have complex and difficult to predict effects, driving some participants to antipathy, leaving some unaffected, and manipulating others into feeling empathy towards the robot.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"195 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124343043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gestural behavioral implementation on a humanoid robotic platform for effective social interaction","authors":"LaVonda Brown, A. Howard","doi":"10.1109/ROMAN.2014.6926297","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926297","url":null,"abstract":"The role of emotions in social scenarios is to provide an inherent mode of communication between two parties. When emotions are properly employed and understood, people are able to respond appropriately, which further enhances the social interaction. Ultimately, effective emotion execution in social settings has the capability to build rapport, improve engagement, optimize learning, provide comfort, and increase overall likability. In this paper, we discuss associating dominant emotions of effective social interaction to gestural behaviors on a humanoid robotic platform. Studies with 13 participants interacting with the robot show that by integrating key principles related to the characteristics of happy and sad emotions, the intended emotion is perceived across all participants with 95.19% and 94.23% sensitivity, respectively.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129861633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Expressive motion with x, y and theta: Laban Effort Features for mobile robots","authors":"H. Knight, R. Simmons","doi":"10.1109/ROMAN.2014.6926264","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926264","url":null,"abstract":"There is a saying that 95% of communication is body language, but few robot systems today make effective use of that ubiquitous channel. Motion is an essential area of social communication that will enable robots and people to collaborate naturally, develop rapport, and seamlessly share environments. The proposed work presents a principled set of motion features based on the Laban Effort system, a widespread and extensively tested acting ontology for the dynamics of “how” we enact motion. The features allow us to analyze and, in future work, generate expressive motion using position (x, y) and orientation (theta). We formulate representative features for each Effort and parameterize them on expressive motion sample trajectories collected from experts in robotics and theater. We then produce classifiers for different “manners” of moving and assess the quality of results by comparing them to the humans labeling the same set of paths on Amazon Mechanical Turk. Results indicate that the machine analysis (41.7% match between intended and classified manner) achieves similar accuracy overall compared to a human benchmark (41.2% match). We conclude that these motion features perform well for analyzing expression in low degree of freedom systems and could be used to help design more effectively expressive mobile robots.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130780272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"When a robot orients visitors to an exhibit. Referential practices and interactional dynamics in real world HRI","authors":"K. Pitsch, S. Wrede","doi":"10.1109/ROMAN.2014.6926227","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926227","url":null,"abstract":"A basic task for robots interacting with humans consists in guiding their focus of attention. Existing guidelines for a robot's multimodal deixis are primarily focused on the speaker (talk-gesture-coordination, handshape). Conducting a field trial with a museum guide robot, we tested these individualistic referential strategies in the dynamic conditions of real-world HRI and found that their success ranges between 27% and 95%. Qualitative video-based micro-analysis revealed that the users experienced problems when they were not facing the robot at the moment of the deictic gesture. Also the importance of the robot's head orientation became evident. Implications are drawn as design guidelines for an interactional account of modeling referential strategies for HRI.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128998012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VT-ware: A wearable tactile device for upper extremity motion guidance","authors":"Yeonsub Jin, Hanyong Chun, Euntai Kim, Sungchul Kang","doi":"10.1109/ROMAN.2014.6926275","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926275","url":null,"abstract":"In this study, we developed and evaluated a tactile stimulation device for upper extremity motion guidance. The developed device stimulates skin pressing directly using “tapping.” A minimal number of actuators are used in the tactile stimulation device that is worn on the wrist. The device consists of six Tiny Ultrasonic Linear Actuator (TULA) modules, a control circuit, an upper case, and a lower case. We estimated motions through kinematic analysis of the upper extremities for motion guidance and our driving algorithm applied a tactile illusion to generate directional information cues and tapped one point using a tactile stimulation device to guide upper extremity motion. To evaluate the developed device, an experiment was conducted to test whether directional information can be successfully displayed by the device. As a result, it was found that the directional information cues could be reliably conveyed through the wrist with tactile stimulation using a “tapping” method that is based on tactile illusion, though the number of actuators that display continuous tactile stimulation is limited.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"333 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122324179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Is robot telepathy acceptable? Investigating effects of nonverbal robot-robot communication on human-robot interaction","authors":"T. Williams, Priscilla Briggs, N. Pelz, Matthias Scheutz","doi":"10.1109/ROMAN.2014.6926365","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926365","url":null,"abstract":"Recent research indicates that other factors in addition to appearance may contribute to the “Uncanny Valley” effect, and it is possible that “uncanny actions” such as “robot telepathy” - the nonverbal exchange of information among multiple robots - could be one such factor. We thus specifically examine whether humans are negatively affected by displays of nonverbal robot-robot communication through a disaster relief scenario in which one robot must relay information from a human participant to another robot in order to successfully complete a task. Our results showed no significant difference between the verbal and nonverbal communication strategies, thus suggesting that “telepathic information transmission” is acceptable. However, we also found several unexplained robot-specific effects, prompting future follow-up studies to determine their causes and the extent to which these effects might impact human perception and acceptance of robot communication strategies.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126103491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel robotic neck for realizing an anatomically accurate android head targeting facial perception research","authors":"Edgar Flores, S. Fels","doi":"10.1109/ROMAN.2014.6926254","DOIUrl":"https://doi.org/10.1109/ROMAN.2014.6926254","url":null,"abstract":"we describe a novel robotic neck mechanism that supports realistic human head motion. Our design uses a 3-DOF spherical neck inspired by the 2-DOF spherical wrist of the Orthoglide 5-axis industrial robot. We use a gimbal-like mechanism to combine three 1-DOF motion components to rotate the head about a common point and around the three principal axes. Based on this design, we implemented and compared our neck in an android called Uma using human expressive neck motion specifications to ensure it is capable of human-like motion. Based on our evaluations, the neck has been shown to be suitable for perception experiments that require human-like head motion.","PeriodicalId":235810,"journal":{"name":"The 23rd IEEE International Symposium on Robot and Human Interactive Communication","volume":"171 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122917467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}