{"title":"A hybrid model for gesture recognition and speech synchronization","authors":"Umberto Maniscalco, A. Messina, P. Storniolo","doi":"10.1109/ARSO56563.2023.10187512","DOIUrl":"https://doi.org/10.1109/ARSO56563.2023.10187512","url":null,"abstract":"Gestures should be considered an integral part of language. Non-verbal communication integrates, enriches, and sometimes wholly replaces speech. Therefore, the anthropomorphization of human-robot or human-machine interaction cannot ignore the integration of the auditory and visual channels. This article presents a model for gesture recognition and its synchronization with speech. We imagine a scenario in which a human interacts in natural language with a robotic agent that knows the organization of the surrounding space and the disposition of the objects in it, pointing at the items they intend to refer to. The model recognizes the stroke-hold typical of the deictic gesture and identifies the word or words in the user's sentence that correspond to the gesture. The purpose of the model is to replace demonstrative adjectives and pronouns, or other indexical expressions, with spatial information that helps in recognizing the intent of the sentence. To test the system, we have built a development and simulation framework based on a web interface. The first results are very encouraging. The model has been shown to work well in real time, with reasonable success rates on the assigned task.","PeriodicalId":382832,"journal":{"name":"2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127232152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The “Eve effect bias”: Epistemic Vigilance and Human Belief in Concealed Capacities of Social Robots","authors":"Robin Gigandet, Xénia Dutoit, Bing-chuan Li, Maria C. Diana, T. Nazir","doi":"10.1109/ARSO56563.2023.10187469","DOIUrl":"https://doi.org/10.1109/ARSO56563.2023.10187469","url":null,"abstract":"Artificial social agents (ASAs) are gaining popularity, but reports suggest that humans do not always coexist harmoniously with them. This exploratory study examined whether humans pay attention to cues of falsehood or deceit when interacting with ASAs. To infer such epistemic vigilance, participants' N400 brain signals were analyzed in response to discrepancies between a robot's physical appearance and its speech, and ratings were collected for statements about the robot's cognitive abilities. Initial results suggest that humans do exhibit epistemic vigilance, as evidenced 1) by a more pronounced N400 component when participants heard sentences contradicting the robot's physical abilities and 2) by overall lower rating scores for the robot's cognitive abilities. However, approximately two-thirds of participants showed a “concealed capacity bias,” whereby they reported believing that the robot could have concealed arms or legs, despite physical evidence to the contrary. This bias, referred to as the “Eve effect bias,” reduced the N400 effect and amplified the perceived capacities of the robot, suggesting that individuals influenced by it may be less critical of the accuracy and plausibility of information provided by artificial agents. Consequently, humans may accept information from ASAs even when it contradicts common sense. These findings emphasize the need for transparency, unbiased information processing, and user education about the limitations and capabilities of ASAs.","PeriodicalId":382832,"journal":{"name":"2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133403763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Natural head and body orientation for humanoid robots during conversations with moving human partners through motion capture analysis","authors":"Pranav Barot, Ewen N. MacDonald, K. Mombaur","doi":"10.1109/ARSO56563.2023.10187462","DOIUrl":"https://doi.org/10.1109/ARSO56563.2023.10187462","url":null,"abstract":"In conversations between humans, a natural body and head orientation towards the interlocutor is important for social interaction. Humanoids communicating with humans have to learn how to orient themselves properly, which becomes a challenging task in the case of moving conversation partners. Studies of conversational behaviour often involve only stationary partners. In this research, we perform a motion capture study to address the scenario of moving subjects. Specifically, trials were recorded during conversations between a human participant and an interlocutor, with a focus on the behaviour of the head, shoulders, and feet. The results help to better understand how humans behave while conversing with non-stationary interlocutors. The data from the trials were used to generate a mathematical model describing the relationship between the angle at which the interlocutor is located and the orientations of the head, shoulders, and feet while tracking is performed. A new model that couples the motion of the interlocutor, the head, and the shoulders is introduced, as well as a model of stepping, in order to better replicate participant behaviour. The models are evaluated and then deployed on the REEM-C humanoid robot to generate natural robot behaviour and improve human-robot interaction.","PeriodicalId":382832,"journal":{"name":"2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO)","volume":"226 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120966639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Studying How Object Handoff Orientations Relate to Subject Preferences on Handover","authors":"N. Wiederhold, Mingjun Li, Nikolas Lamb, DiMaggio Paris, Alaina Tulskie, Sean Banerjee, N. Banerjee","doi":"10.1109/ARSO56563.2023.10187566","DOIUrl":"https://doi.org/10.1109/ARSO56563.2023.10187566","url":null,"abstract":"Data collection involving human-human handovers has enabled enormous leaps in human-robot interaction research. However, all existing datasets lack information on giver and receiver preferences in handover interactions. Most previous studies have relied on small-scale human participant experiments involving a limited range of objects, where participants are often expected to share similar handover attitudes. Nevertheless, in real-world scenarios with diverse objects, giver and receiver preferences will likely not always align. In this paper, we present a large-scale study of human-human handover behavior involving 96 participant dyads, derived from 32 participants in total, and 204 objects. Each dyad consists of two participants engaging in handovers; after a giver-initiated handover, participants provide comfort ratings and binary responses indicating whether they agreed on the handover location. We also ask the receiver to demonstrate their preferred handover to gain detailed information on object pose at the handoff point. Our study captures 4-viewpoint RGB-D recordings of both the giver-initiated forward handover and the receiver-initiated demonstration handover. Using the collected data, we evaluate how the subjective ratings provided by participants correlate with objective measures of alignment of object orientation at handoff.","PeriodicalId":382832,"journal":{"name":"2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114140427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perception of Human-Robot Collaboration Across Countries and Job Domains","authors":"Gurpreet Kaur, Sean Banerjee, N. Banerjee","doi":"10.1109/ARSO56563.2023.10187560","DOIUrl":"https://doi.org/10.1109/ARSO56563.2023.10187560","url":null,"abstract":"Understanding the perceptions of blue-collar workers on safety, autonomy, and job security in collaborative human-robot environments is vital to ensure that fear and job displacement are minimized in the future. Perception and fear of robots are driven by culture, country, education level, position in the labor market, and minority status. Recent studies suggest that workers may develop positive views if robots are used to perform less desirable tasks, improve skills, and facilitate workplace safety. In this paper, we conduct a survey of worker perceptions towards robots of varying collaborative capabilities: fully interventional (always assistive), fully standoff (never directly assistive), and assistive on an as-needed basis. We administer a questionnaire-based survey to blue-collar workers in four countries (the United States of America, Canada, the United Kingdom, and Australia) working in construction, contract work, manufacturing, retail, transportation and delivery, and warehousing. We received 530 successful responses in total from workers across all four countries and six job domains. To better understand whether perceptions of collaborative robots and human co-workers are universal or job- and country-based, we break down our analysis by respondent-reported job domain and country. We find perceptions of co-workers and robots to be job-domain and country dependent, underscoring the need to develop robotic assistants with job-domain and cultural awareness.","PeriodicalId":382832,"journal":{"name":"2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129305150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FollowMe: a Robust Person Following Framework Based on Visual Re-Identification and Gestures","authors":"Federico Rollo, Andrea Zunino, G. Raiola, Fabio Amadio, A. Ajoudani, N. Tsagarakis","doi":"10.1109/ARSO56563.2023.10187536","DOIUrl":"https://doi.org/10.1109/ARSO56563.2023.10187536","url":null,"abstract":"Human-robot interaction (HRI) has become a crucial enabler in homes and industries for facilitating operational flexibility. When it comes to mobile collaborative robots, this flexibility can be further increased thanks to the autonomous mobility and navigation capacity of the robotic agents, expanding their workspace and consequently the personalizable assistance they can provide to human operators. This, however, requires that the robot is capable of detecting and identifying its human counterpart in all stages of the collaborative task, in particular while following a human in crowded workplaces. To respond to this need, we developed a unified perception and navigation framework that enables the robot to identify and follow a target person using a combination of visual Re-Identification (Re-ID), hand gesture detection, and collision-free navigation. The Re-ID module can autonomously learn the features of a target person and uses the acquired knowledge to visually re-identify the target. The navigation stack is used to follow the target while avoiding obstacles and other individuals in the environment. Experiments are conducted with a few subjects in a laboratory setting where some unknown dynamic obstacles are introduced.","PeriodicalId":382832,"journal":{"name":"2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132846973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Productive teaming under uncertainty: when a human and a machine classify objects together","authors":"Anne Rother, Gunther Notni, Alexander Hasse, Benjamin Noack, C. Beyer, Jan Reißmann, Chen Zhang, Marco Ragni, Julia C. Arlinghaus, M. Spiliopoulou","doi":"10.1109/ARSO56563.2023.10187430","DOIUrl":"https://doi.org/10.1109/ARSO56563.2023.10187430","url":null,"abstract":"We study the task of object categorization in an industrial setting. Typically, a machine classifies objects according to an internal, inferred model and calls on a human worker when it is uncertain. However, the human worker may also be uncertain. We elaborate on the challenges of, and solutions for, assessing the certainty of the human without disturbing the industrial process, and for assessing label reliability and human certainty in conventional object classification and crowdworking. Although there are methods for measuring stress, insights into the correlation of stress and uncertainty, and uncertainty indicators during labeling by humans, these advances have yet to be combined to solve the aforementioned uncertainty challenge. We propose a solution as a sequence of tasks, starting with an experiment that measures human certainty in a task of controlled difficulty, from which we can associate certainty with correctness and with levels of vital signals.","PeriodicalId":382832,"journal":{"name":"2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132998616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Incremental PI-like Fuzzy Logic Control of a Vacuum-Powered Artificial Muscle for Soft Exoskeletons","authors":"Esteban Centeno, Mijaíl Jaén Mendoza, Cesar F. Pinedo, Wangdo Kim, E. Roche, E. Vela","doi":"10.1109/ARSO56563.2023.10187413","DOIUrl":"https://doi.org/10.1109/ARSO56563.2023.10187413","url":null,"abstract":"Newborns with lower-limb motor deficiencies need rehabilitation. Soft exoskeletons based on soft artificial muscles are promising for this purpose. However, the highly non-linear dynamic behavior of such systems makes it very difficult to obtain an accurate model for control. This paper proposes a model-free method called the Incremental PI-like Fuzzy Logic Controller (PI-like FLC). This variation facilitates the design and interpretation of the fuzzy rules; furthermore, we were able to use optimization algorithms and fuzzy C-means to obtain the fuzzy sets and membership functions from data produced by a previously tuned PID controller. The PI-like FLC was physically implemented on infant dummies of 0 and 6 months of age to control knee flexion-extension motion, with a PID controller used for comparison. The results demonstrate successful tracking control and robustness against parametric uncertainties and disturbances for both dummies' motion, without the need to retune the controller.","PeriodicalId":382832,"journal":{"name":"2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132240571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recursive inverse dynamics of a swimming snake-like robot with a tree-like mechanical structure","authors":"Xiaowei Xie, J. Herault, V. Lebastard, F. Boyer","doi":"10.1109/ARSO56563.2023.10187577","DOIUrl":"https://doi.org/10.1109/ARSO56563.2023.10187577","url":null,"abstract":"In this paper, we report a recursive inverse dynamical model for a new snake-like robot called NATRIX. This robot has been designed to maintain its gaze on the water surface and monitor sensitive ecosystems. Inspired by real snakes, the robot features rotating outer shells that allow it to change the level of immersion of each module and quickly re-stabilize itself. This new degree of freedom leads to an original tree-like geometric structure. We present here the theoretical model and the numerical solutions that allow us to simulate the dynamics of the robot on the water surface in real time. After reporting benchmarks of the simulator, we present surprising preliminary results suggesting the possibility of capsizing in a given frequency range.","PeriodicalId":382832,"journal":{"name":"2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO)","volume":"310 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124750970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lightweight Generator of Synthetic IMU Sensor Data for Accurate AHRS Analysis","authors":"Hristina Radak, Christian Scheunert, Giang T. Nguyen, Vu Nguyen, F. Fitzek","doi":"10.1109/ARSO56563.2023.10187484","DOIUrl":"https://doi.org/10.1109/ARSO56563.2023.10187484","url":null,"abstract":"Accurate orientation estimation is crucial in many application areas, including unmanned ground and aerial navigation for industrial automation and human motion tracking for human-robot interaction. State-of-the-art techniques leverage Inertial Measurement Units (IMUs), which provide Magnetic, Angular Rate, and Gravity (MARG) sensor measurements, due to their small size, low energy footprint, and ever-increasing accuracy. Available attitude determination techniques rely on advanced signal processing algorithms to compensate for gyroscope integration drift. Comparisons of different algorithms depend solely on the collected ground-truth data set, which is difficult to replicate. This paper introduces a lightweight software framework to generate synthetic IMU sensor data. We generate the ground-truth orientation of the sensor body frame and apply an inverse navigation process to obtain the corresponding synthetic sensor data. Additionally, we compare two well-known orientation estimation algorithms applied to data generated synthetically with our framework. Evaluation results demonstrate that the proposed software framework is a fast and easy-to-use solution for evaluating different orientation estimation algorithms while providing access to ground-truth measurements.","PeriodicalId":382832,"journal":{"name":"2023 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124006061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}