{"title":"Comparative analysis of creative problem solving tasks across age groups using modular cube robotics.","authors":"Mehedi Hasan Anik, Margarida Romero","doi":"10.3389/frobt.2024.1497511","DOIUrl":"10.3389/frobt.2024.1497511","url":null,"abstract":"<p><p>Creative Problem Solving (CPS) is an important competency when using digital artifacts for educational purposes. Using a dual-process approach, this study examines the divergent thinking scores (fluidity, flexibility, and originality) and problem-solving speed in CPS of different age groups. Participants engaged in CreaCube CPS tasks with educational robotics for two consecutive instances, with performance analyzed to explore the influence of prior experience and creative intentions. In the first instance, infants and children demonstrated greater originality compared to seniors, solving problems quickly but with less originality. In the second instance, teens, young adults, and seniors showed enhanced originality. The results highlight trends influenced by prior experience and creative intentions, emphasizing the need for customized instructions with modular robotics to improve CPS across the lifespan.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1497511"},"PeriodicalIF":2.9,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11671365/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142903865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploiting passive behaviours for diverse musical playing using the parametric hand.","authors":"Kieran Gilday, Dohyeon Pyeon, S Dhanush, Kyu-Jin Cho, Josie Hughes","doi":"10.3389/frobt.2024.1463744","DOIUrl":"10.3389/frobt.2024.1463744","url":null,"abstract":"<p><p>Creativity and style in music playing originates from constraints and imperfect interactions between instruments and players. Digital and robotic systems have so far been unable to capture this naturalistic playing. Whether as an additional tool for musicians, function restoration with prosthetics, or artificial intelligence-powered systems, the physical embodiment and interactions generated are critical for expression and connection with an audience. We introduce the parametric hand, which serves as a platform to explore the generation of diverse interactions for the stylistic playing of both pianos and guitars. The hand's anatomical design and non-linear actuation are exploitable with simple kinematic modeling and synergistic actuation. This enables the modulation of two degrees of freedom for piano chord playing and guitar strumming with up to 6.6 times the variation in the signal amplitude. When only varying hand stiffness properties, we achieve capabilities similar to the variation exhibited in human strumming. Finally, we demonstrate the exploitability of behaviours with the rapid programming of posture and stiffness for sequential instrument playing, including guitar pick grasping. In summary, we highlight the utility of embodied intelligence in musical instrument playing through interactive behavioural diversity, as well as the ability to exploit behaviours over this diversity through designed behavioural robustness and synergistic actuation.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1463744"},"PeriodicalIF":2.9,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11671752/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142903871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fostering children's creativity through LLM-driven storytelling with a social robot.","authors":"Maha Elgarf, Hanan Salam, Christopher Peters","doi":"10.3389/frobt.2024.1457429","DOIUrl":"10.3389/frobt.2024.1457429","url":null,"abstract":"<p><p>Creativity is an important skill that is known to plummet in children when they start school education that limits their freedom of expression and their imagination. On the other hand, research has shown that integrating social robots into educational settings has the potential to maximize children's learning outcomes. Therefore, our aim in this work was to investigate stimulating children's creativity through child-robot interactions. We fine-tuned a Large Language Model (LLM) to exhibit creative behavior and non-creative behavior in a robot and conducted two studies with children to evaluate the viability of our methods in fostering children's creativity skills. We evaluated creativity in terms of four metrics: fluency, flexibility, elaboration, and originality. We first conducted a study as a storytelling interaction between a child and a wizard-ed social robot in one of two conditions: creative versus non-creative with 38 children. We investigated whether interacting with a creative social robot will elicit more creativity from children. However, we did not find a significant effect of the robot's creativity on children's creative abilities. Second, in an attempt to increase the possibility for the robot to have an impact on children's creativity and to increase the fluidity of the interaction, we produced two models that allow a social agent to autonomously engage with a human in a storytelling context in a creative manner and a non-creative manner respectively. Finally, we conducted another study to evaluate our models by deploying them on a social robot and evaluating them with 103 children. Our results show that children who interacted with the creative autonomous robot were more creative than children who interacted with the non-creative autonomous robot in terms of the fluency, the flexibility, and the elaboration aspects of creativity. The results highlight the difference in children's learning performance when inetracting with a robot operated at different autonomy levels (Wizard of Oz versus autonoumous). Furthermore, they emphasize on the impact of designing adequate robot's behaviors on children's corresponding learning gains in child-robot interactions.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1457429"},"PeriodicalIF":2.9,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11671368/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142903817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A roadmap for improving data quality through standards for collaborative intelligence in human-robot applications.","authors":"Shakra Mehak, Inês F Ramos, Keerthi Sagar, Aswin Ramasubramanian, John D Kelleher, Michael Guilfoyle, Gabriele Gianini, Ernesto Damiani, Maria Chiara Leva","doi":"10.3389/frobt.2024.1434351","DOIUrl":"10.3389/frobt.2024.1434351","url":null,"abstract":"<p><p>Collaborative intelligence (CI) involves human-machine interactions and is deemed safety-critical because their reliable interactions are crucial in preventing severe injuries and environmental damage. As these applications become increasingly data-driven, the reliability of CI applications depends on the quality of data, shaping the system's ability to interpret and respond in diverse and often unpredictable environments. In this regard, it is important to adhere to data quality standards and guidelines, thus facilitating the advancement of these collaborative systems in industry. This study presents the challenges of data quality in CI applications within industrial environments, with two use cases that focus on the collection of data in Human-Robot Interaction (HRI). The first use case involves a framework for quantifying human and robot performance within the context of naturalistic robot learning, wherein humans teach robots using intuitive programming methods within the domain of HRI. The second use case presents real-time user state monitoring for adaptive multi-modal teleoperation, that allows for a dynamic adaptation of the system's interface, interaction modality and automation level based on user needs. The article proposes a hybrid standardization derived from established data quality-related ISO standards and addresses the unique challenges associated with multi-modal HRI data acquisition. The use cases presented in this study were carried out as part of an EU-funded project, Collaborative Intelligence for Safety-Critical Systems (CISC).</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1434351"},"PeriodicalIF":2.9,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11669550/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142899266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Advancing teleoperation for legged manipulation with wearable motion capture.","authors":"Chengxu Zhou, Yuhui Wan, Christopher Peers, Andromachi Maria Delfaki, Dimitrios Kanoulas","doi":"10.3389/frobt.2024.1430842","DOIUrl":"10.3389/frobt.2024.1430842","url":null,"abstract":"<p><p>The sanctity of human life mandates the replacement of individuals with robotic systems in the execution of hazardous tasks. Explosive Ordnance Disposal (EOD), a field fraught with mortal danger, stands at the forefront of this transition. In this study, we explore the potential of robotic telepresence as a safeguard for human operatives, drawing on the robust capabilities demonstrated by legged manipulators in diverse operational contexts. The challenge of autonomy in such precarious domains underscores the advantages of teleoperation-a harmonious blend of human intuition and robotic execution. Herein, we introduce a cost-effective telepresence and teleoperation system employing a legged manipulator, which combines a quadruped robot, an integrated manipulative arm, and RGB-D sensory capabilities. Our innovative approach tackles the intricate challenge of whole-body control for a quadrupedal manipulator. The core of our system is an IMU-based motion capture suit, enabling intuitive teleoperation, augmented by immersive visual telepresence via a VR headset. We have empirically validated our integrated system through rigorous real-world applications, focusing on loco-manipulation tasks that necessitate comprehensive robot control and enhanced visual telepresence for EOD operations.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1430842"},"PeriodicalIF":2.9,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11668679/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142899282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can a human sing with an unseen artificial partner? Coordination dynamics when singing with an unseen human or artificial partner.","authors":"Rina Nishiyama, Tetsushi Nonaka","doi":"10.3389/frobt.2024.1463477","DOIUrl":"10.3389/frobt.2024.1463477","url":null,"abstract":"<p><p>This study investigated whether a singer's coordination patterns differ when singing with an unseen human partner versus an unseen artificial partner (VOCALOID 6 voice synthesis software). We used cross-correlation analysis to compare the correlation of the amplitude envelope time series between the partner's and the participant's singing voices. We also conducted a Granger causality test to determine whether the past amplitude envelope of the partner helps predict the future amplitude envelope of the participants, or if the reverse is true. We found more pronounced characteristics of anticipatory synchronization and increased similarity in the unfolding dynamics of the amplitude envelopes in the human-partner condition compared to the artificial-partner condition, despite the tempo fluctuations in the human-partner condition. The results suggested that subtle qualities of the human singing voice, possibly stemming from intrinsic dynamics of the human body, may contain information that enables human agents to align their singing behavior dynamics with a human partner.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1463477"},"PeriodicalIF":2.9,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11663750/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142883463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A versatile real-time vision-led runway localisation system for enhanced autonomy.","authors":"Kyriacos Tsapparellas, Nickolay Jelev, Jonathon Waters, Aditya M Shrikhande, Sabine Brunswicker, Lyudmila S Mihaylova","doi":"10.3389/frobt.2024.1490812","DOIUrl":"10.3389/frobt.2024.1490812","url":null,"abstract":"<p><p>This paper proposes a solution to the challenging task of autonomously landing Unmanned Aerial Vehicles (UAVs). An onboard computer vision module integrates the vision system with the ground control communication and video server connection. The vision platform performs feature extraction using the Speeded Up Robust Features (SURF), followed by fast Structured Forests edge detection and then smoothing with a Kalman filter for accurate runway sidelines prediction. A thorough evaluation is performed over real-world and simulation environments with respect to accuracy and processing time, in comparison with state-of-the-art edge detection approaches. The vision system is validated over videos with clear and difficult weather conditions, including with fog, varying lighting conditions and crosswind landing. The experiments are performed using data from the X-Plane 11 flight simulator and real flight data from the Uncrewed Low-cost TRAnsport (ULTRA) self-flying cargo UAV. The vision-led system can localise the runway sidelines with a Structured Forests approach with an accuracy approximately 84.4%, outperforming the state-of-the-art approaches and delivering real-time performance. The main contribution of this work consists of the developed vision-led system for runway detection to aid autonomous landing of UAVs using electro-optical cameras. Although implemented with the ULTRA UAV, the vision-led system is applicable to any other UAV.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1490812"},"PeriodicalIF":2.9,"publicationDate":"2024-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11660180/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142877889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Music, body, and machine: gesture-based synchronization in human-robot musical interaction.","authors":"Xuedan Gao, Amit Rogel, Raghavasimhan Sankaranarayanan, Brody Dowling, Gil Weinberg","doi":"10.3389/frobt.2024.1461615","DOIUrl":"10.3389/frobt.2024.1461615","url":null,"abstract":"<p><p>Musical performance relies on nonverbal cues for conveying information among musicians. Human musicians use bodily gestures to communicate their interpretation and intentions to their collaborators, from mood and expression to anticipatory cues regarding structure and tempo. Robotic Musicians can use their physical bodies in a similar way when interacting with fellow musicians. The paper presents a new theoretical framework to classify musical gestures and a study evaluating the effect of robotic gestures on synchronization between human musicians and Shimon - a robotic marimba player developed at Georgia Tech. Shimon utilizes head and arm movements to signify musical information such as expected notes, tempo, and beat. The study, in which piano players were asked to play along with Shimon, assessed the effectiveness of these gestures on human-robot synchronization. Subjects were evaluated for their ability to synchronize with unknown tempo changes as communicated by Shimon's ancillary and social gestures. The results demonstrate the significant contribution of non-instrumental gestures to human-robot synchronization, highlighting the importance of non-music-making gestures for anticipation and coordination in human-robot musical collaboration. Subjects also indicated more positive feelings when interacting with the robot's ancillary and social gestures, indicating the role of these gestures in supporting engaging and enjoyable musical experiences.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1461615"},"PeriodicalIF":2.9,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11655300/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142865916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiple-agent promotion in a grocery store: effects of modality and variability of agents on customer memory.","authors":"Takato Mizuho, Yuki Okafuji, Jun Baba, Takuji Narumi","doi":"10.3389/frobt.2024.1397230","DOIUrl":"10.3389/frobt.2024.1397230","url":null,"abstract":"<p><p>The use of social robots for product advertising is becoming prevalent. Previous studies have demonstrated that social robots can positively impact <i>ad hoc</i> sales recommendations. However, the essential question of \"how effectively customers remember the advertised content\" remains unexplored. To address this gap, we conducted a field study where physical robots or virtual agents were stationed at two locations within a grocery store for product promotion. Based on prior research, we hypothesized that customers would exhibit better recall of promotional content when it is heard from different agents rather than the same agent. Moreover, we posited that customers would exhibit more favorable social attitudes toward physical robots than virtual agents, resulting in enhanced recall. The results did not support our hypotheses, as no significant differences were observed between the conditions. However, when the physical robot was used, we observed a significant positive correlation between subjective ratings such as social presence and recall performance. This trend was not evident when the virtual agent was used. This study is a stepping stone for future research evaluating agent-based product promotion in terms of customer memory.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1397230"},"PeriodicalIF":2.9,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11655321/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142865915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"L-AVATeD: The lidar and visual walking terrain dataset.","authors":"David Whipps, Patrick Ippersiel, Philippe C Dixon","doi":"10.3389/frobt.2024.1384575","DOIUrl":"10.3389/frobt.2024.1384575","url":null,"abstract":"","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1384575"},"PeriodicalIF":2.9,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11653013/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142856117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}