Frontiers in Robotics and AI: Latest Publications

A survey of ontology-enabled processes for dependable robot autonomy
Frontiers in Robotics and AI Pub Date : 2024-07-10 DOI: 10.3389/frobt.2024.1377897
Esther Aguado, Virgilio Gómez, Miguel Hernando, Claudio Rossi, Ricardo Sanz
{"title":"A survey of ontology-enabled processes for dependable robot autonomy","authors":"Esther Aguado, Virgilio Gómez, Miguel Hernando, Claudio Rossi, Ricardo Sanz","doi":"10.3389/frobt.2024.1377897","DOIUrl":"https://doi.org/10.3389/frobt.2024.1377897","url":null,"abstract":"Autonomous robots are already present in a variety of domains performing complex tasks. Their deployment in open-ended environments offers endless possibilities. However, there are still risks due to unresolved issues in dependability and trust. Knowledge representation and reasoning provide tools for handling explicit information, endowing systems with a deeper understanding of the situations they face. This article explores the use of declarative knowledge for autonomous robots to represent and reason about their environment, their designs, and the complex missions they accomplish. This information can be exploited at runtime by the robots themselves to adapt their structure or re-plan their actions to finish their mission goals, even in the presence of unexpected events. The primary focus of this article is to provide an overview of popular and recent research that uses knowledge-based approaches to increase robot autonomy. Specifically, the ontologies surveyed are related to the selection and arrangement of actions, representing concepts such as autonomy, planning, or behavior. Additionally, they may be related to overcoming contingencies with concepts such as fault or adapt. A systematic exploration is carried out to analyze the use of ontologies in autonomous robots, with the objective of facilitating the development of complex missions. Special attention is dedicated to examining how ontologies are leveraged in real time to ensure the successful completion of missions while aligning with user and owner expectations. The motivation of this analysis is to examine the potential of knowledge-driven approaches as a means to improve flexibility, explainability, and efficacy in autonomous robotic systems.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"51 15","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141660070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
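To make the runtime use of declarative knowledge concrete, the following is a minimal sketch (not taken from any of the surveyed works): a toy triple-based knowledge base that a robot could query when a component fails, in order to find an alternative provider for a required capability and re-plan. All component and capability names are illustrative assumptions.

```python
# Toy declarative knowledge base for contingency handling.
# Facts are (subject, predicate, object) triples, loosely mimicking an ontology ABox.
facts = {
    ("camera_front", "is_a", "Sensor"),
    ("camera_front", "provides", "obstacle_detection"),
    ("lidar_top", "is_a", "Sensor"),
    ("lidar_top", "provides", "obstacle_detection"),
    ("navigate_to_goal", "requires", "obstacle_detection"),
}

def providers(capability):
    """All components asserted to provide a capability."""
    return {s for (s, p, o) in facts if p == "provides" and o == capability}

def recover(task, failed_component):
    """Return an adaptation: an alternative provider for each capability the task needs."""
    needed = {o for (s, p, o) in facts if s == task and p == "requires"}
    plan = {}
    for capability in needed:
        alternatives = providers(capability) - {failed_component}
        plan[capability] = next(iter(alternatives), None)  # None means no recovery known
    return plan

print(recover("navigate_to_goal", "camera_front"))
# {'obstacle_detection': 'lidar_top'} -> re-plan the mission using the lidar instead
```

A real system would back this with a full ontology and reasoner rather than a Python set, but the query pattern (required capability, available providers, substitute on fault) is the same idea the surveyed ontologies formalize.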
Understanding consumer attitudes towards second-hand robots for the home
Frontiers in Robotics and AI Pub Date : 2024-07-10 DOI: 10.3389/frobt.2024.1324519
Helen McGloin, Matthew Studley, Richard Mawle, A. Winfield
{"title":"Understanding consumer attitudes towards second-hand robots for the home","authors":"Helen McGloin, Matthew Studley, Richard Mawle, A. Winfield","doi":"10.3389/frobt.2024.1324519","DOIUrl":"https://doi.org/10.3389/frobt.2024.1324519","url":null,"abstract":"As robot numbers in the home increase, creating a market for second-hand robotic systems is essential to reduce the waste impact of the industry. Via a survey, consumer attitudes of United Kingdom participants towards second-hand robots were investigated; finding that second-hand robots with guarantees have an equal purchasing interest compared to new systems, highlighting the opportunity for manufacturers and retailers to develop certification standards for second-hand robots to move towards a circular economy. Consumer demographics also demonstrated that those most open to the purchase of both new and second-hand systems were women, those aged 18–25 years old, and those who have previously owned a robot for the home. Participants’ prior ownership of second-hand electronic devices (such as phones and laptops) did not affect rates of interest for second-hand robotic systems suggesting that the technology is still too new for people to be able to project their experience of current second-hand electronics to that of a robot. Additionally, this research found the robotics industry can consider the potential market for second-hand robots to be more similar to the second-hand smartphone market than to the household electronics market, and lessons learnt from the concerns raised by consumers for other internet-enabled electronic devices are similar to those concerns for second-hand robots. This provides an opportunity for the industry to break down the barriers for a circular economy earlier in the technology maturity process than has been seen for other electronics.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"37 23","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141661753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Editorial: Human-centered robot vision and artificial perception
Frontiers in Robotics and AI Pub Date : 2024-07-08 DOI: 10.3389/frobt.2024.1406280
Qing Gao, Xin Zhang, Chunwei Tian, Hongwei Gao, Zhaojie Ju
{"title":"Editorial: Human-centered robot vision and artificial perception","authors":"Qing Gao, Xin Zhang, Chunwei Tian, Hongwei Gao, Zhaojie Ju","doi":"10.3389/frobt.2024.1406280","DOIUrl":"https://doi.org/10.3389/frobt.2024.1406280","url":null,"abstract":"","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":" 23","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141668356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CEPB dataset: a photorealistic dataset to foster the research on bin picking in cluttered environments
Frontiers in Robotics and AI Pub Date : 2024-05-16 DOI: 10.3389/frobt.2024.1222465
P. Tripicchio, Salvatore D’Avella, C. Avizzano
{"title":"CEPB dataset: a photorealistic dataset to foster the research on bin picking in cluttered environments","authors":"P. Tripicchio, Salvatore D’Avella, C. Avizzano","doi":"10.3389/frobt.2024.1222465","DOIUrl":"https://doi.org/10.3389/frobt.2024.1222465","url":null,"abstract":"Several datasets have been proposed in the literature, focusing on object detection and pose estimation. The majority of them are interested in recognizing isolated objects or the pose of objects in well-organized scenarios. This work introduces a novel dataset that aims to stress vision algorithms in the difficult task of object detection and pose estimation in highly cluttered scenes concerning the specific case of bin picking for the Cluttered Environment Picking Benchmark (CEPB). The dataset provides about 1.5M virtually generated photo-realistic images (RGB + depth + normals + segmentation) of 50K annotated cluttered scenes mixing rigid, soft, and deformable objects of varying sizes used in existing robotic picking benchmarks together with their 3D models (40 objects). Such images include three different camera positions, three light conditions, and multiple High Dynamic Range Imaging (HDRI) maps for domain randomization purposes. The annotations contain the 2D and 3D bounding boxes of the involved objects, the centroids’ poses (translation + quaternion), and the visibility percentage of the objects’ surfaces. Nearly 10K separated object images are presented to perform simple tests and compare them with more complex cluttered scenarios tests. A baseline performed with the DOPE neural network is reported to highlight the challenges introduced by the novel dataset.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"30 18","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140971346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
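As a rough illustration of how such annotations (2D bounding box, centroid pose as translation plus quaternion, visibility percentage) might be consumed, here is a hypothetical sketch. The field names and the frame convention are assumptions, not the actual CEPB schema.

```python
# Hypothetical parsing of a CEPB-style annotation record; field names are assumed.
import json
import numpy as np

def quaternion_to_rotation_matrix(q):
    """Convert a quaternion (w, x, y, z) to a 3x3 rotation matrix (normalizes first)."""
    w, x, y, z = np.asarray(q, dtype=float) / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

record = json.loads("""
{
  "object_id": "obj_01",
  "bbox_2d": [120, 80, 260, 210],
  "translation": [0.10, -0.05, 0.62],
  "quaternion": [0.92, 0.0, 0.38, 0.0],
  "visibility": 0.47
}
""")

# Build a 4x4 homogeneous transform for the object centroid
# (assumed to be expressed in the camera frame).
T = np.eye(4)
T[:3, :3] = quaternion_to_rotation_matrix(record["quaternion"])
T[:3, 3] = np.array(record["translation"])
print(T)
print("visible fraction of the object surface:", record["visibility"])
```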
Editorial: Human-like robotic hands for biomedical applications and beyond
Frontiers in Robotics and AI Pub Date : 2024-05-13 DOI: 10.3389/frobt.2024.1414971
E. Secco, Y. Noh
{"title":"Editorial: Human-like robotic hands for biomedical applications and beyond","authors":"E. Secco, Y. Noh","doi":"10.3389/frobt.2024.1414971","DOIUrl":"https://doi.org/10.3389/frobt.2024.1414971","url":null,"abstract":"","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"58 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140983476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A comparison of visual and auditory EEG interfaces for robot multi-stage task control
Frontiers in Robotics and AI Pub Date : 2024-05-09 DOI: 10.3389/frobt.2024.1329270
Kai Arulkumaran, Marina Di Vincenzo, Rousslan Fernand Julien Dossa, Shogo Akiyama, Dan Ogawa Lillrank, Motoshige Sato, Kenichi Tomeoka, Shuntaro Sasai
{"title":"A comparison of visual and auditory EEG interfaces for robot multi-stage task control","authors":"Kai Arulkumaran, Marina Di Vincenzo, Rousslan Fernand Julien Dossa, Shogo Akiyama, Dan Ogawa Lillrank, Motoshige Sato, Kenichi Tomeoka, Shuntaro Sasai","doi":"10.3389/frobt.2024.1329270","DOIUrl":"https://doi.org/10.3389/frobt.2024.1329270","url":null,"abstract":"Shared autonomy holds promise for assistive robotics, whereby physically-impaired people can direct robots to perform various tasks for them. However, a robot that is capable of many tasks also introduces many choices for the user, such as which object or location should be the target of interaction. In the context of non-invasive brain-computer interfaces for shared autonomy—most commonly electroencephalography-based—the two most common choices are to provide either auditory or visual stimuli to the user—each with their respective pros and cons. Using the oddball paradigm, we designed comparable auditory and visual interfaces to speak/display the choices to the user, and had users complete a multi-stage robotic manipulation task involving location and object selection. Users displayed differing competencies—and preferences—for the different interfaces, highlighting the importance of considering modalities outside of vision when constructing human-robot interfaces.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":" 11","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140994763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
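The oddball paradigm underlying such interfaces can be sketched in a few lines: rare attended stimuli evoke a P300-like response, so averaging the EEG epochs that follow each candidate's stimulus and scoring the late positivity selects the user's intended choice. The sketch below uses synthetic data and assumed parameters; it is not the authors' pipeline.

```python
# Toy oddball/P300 selection on synthetic EEG epochs.
import numpy as np

rng = np.random.default_rng(0)
fs = 250                      # sampling rate (Hz), assumed
epoch = int(0.8 * fs)         # 800 ms epoch after each stimulus
n_choices, n_reps = 4, 30     # 4 candidate objects/locations, 30 stimuli each
target = 2                    # the choice the user is attending to

def simulate_epoch(is_target):
    x = rng.normal(0, 5, epoch)                    # background EEG noise (uV)
    if is_target:                                  # add a crude P300 bump around 300 ms
        t = np.arange(epoch) / fs
        x += 8 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return x

epochs = np.array([[simulate_epoch(c == target) for _ in range(n_reps)]
                   for c in range(n_choices)])     # shape: (choices, reps, samples)

# Average over repetitions, then score each choice by its mean amplitude
# in a 250-450 ms window where the P300 is expected.
erp = epochs.mean(axis=1)
window = slice(int(0.25 * fs), int(0.45 * fs))
scores = erp[:, window].mean(axis=1)
print("scores:", np.round(scores, 2), "-> selected choice:", int(scores.argmax()))
```

The same scoring logic applies whether the stimuli are spoken labels or flashed icons; only the stimulus presentation changes between the auditory and visual interfaces.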
Quasi-steady aerodynamic modeling and dynamic stability of mosquito-inspired flapping wing pico aerial vehicle
Frontiers in Robotics and AI Pub Date : 2024-05-07 DOI: 10.3389/frobt.2024.1362206
Balbir Singh, Kamarul Arifin Ahmad, Manikandan Murugaiah, N. Yidris, Adi Azriff Basri, Raghuvir Pai
{"title":"Quasi-steady aerodynamic modeling and dynamic stability of mosquito-inspired flapping wing pico aerial vehicle","authors":"Balbir Singh, Kamarul Arifin Ahmad, Manikandan Murugaiah, N. Yidris, Adi Azriff Basri, Raghuvir Pai","doi":"10.3389/frobt.2024.1362206","DOIUrl":"https://doi.org/10.3389/frobt.2024.1362206","url":null,"abstract":"Recent exploration in insect-inspired robotics has generated considerable interest. Among insects navigating at low Reynolds numbers, mosquitoes exhibit distinct flight characteristics, including higher wingbeat frequencies, reduced stroke amplitudes, and slender wings. This leads to unique aerodynamic traits such as trailing edge vortices via wake capture, diminished reliance on leading vortices, and rotational drag. This paper shows the energetic analysis of a mosquito-inspired flapping-wing Pico aerial vehicle during hovering, contributing insights to its future design and fabrication. The investigation relies on kinematic and quasi-steady aerodynamic modeling of a symmetric flapping-wing model with a wingspan of approximately 26 mm, considering translational, rotational, and wake capture force components. The control strategy adapts existing bird flapping wing approaches to accommodate insect wing kinematics and aerodynamic features. Flight controller design is grounded in understanding the impact of kinematics on wing forces. Additionally, a thorough analysis of the dynamic stability of the mosquito-inspired PAV model is conducted, revealing favorable controller response and maneuverability at a small scale. The modified model, incorporating rigid body dynamics and non-averaged aerodynamics, exhibits weak stability without a controller or sufficient power density. However, the controller effectively stabilizes the PAV model, addressing attitude and maneuverability. These preliminary findings offer valuable insights for the mechanical design, aerodynamics, and fabrication of RoboMos, an insect-inspired flapping wing pico aerial vehicle developed at UPM Malaysia.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"175 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141002096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
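For readers unfamiliar with quasi-steady modeling, the translational force component has a simple closed form: lift and drag scale with the instantaneous wing speed squared and with angle-of-attack-dependent coefficients. The sketch below uses the widely used Dickinson-style coefficient form; the coefficient constants, wing geometry, and kinematics are illustrative assumptions, not the values identified in the paper.

```python
# Quasi-steady translational lift/drag over one wingbeat (illustrative parameters).
import numpy as np

rho = 1.225            # air density (kg/m^3)
wing_area = 2.0e-6     # single wing area (m^2), assumed for a ~26 mm span vehicle
f = 600.0              # wingbeat frequency (Hz), assumed
amp = np.radians(40)   # stroke amplitude (rad), assumed "reduced" amplitude
r_ref = 0.009          # reference radius along the wing (m), assumed

def coefficients(alpha):
    """Illustrative quasi-steady lift/drag coefficients vs. angle of attack."""
    C_L = 1.8 * np.sin(2 * alpha)
    C_D = 1.9 - 1.5 * np.cos(2 * alpha)
    return C_L, C_D

t = np.linspace(0, 1 / f, 200)
phi_dot = amp * 2 * np.pi * f * np.cos(2 * np.pi * f * t)  # stroke angular rate
U = np.abs(phi_dot) * r_ref                                # speed at the reference radius
alpha = np.full_like(t, np.radians(45))                    # constant angle of attack, for simplicity

C_L, C_D = coefficients(alpha)
lift = 0.5 * rho * U**2 * wing_area * C_L
drag = 0.5 * rho * U**2 * wing_area * C_D
print("mean translational lift per wing over one wingbeat: %.2e N" % lift.mean())
```

The full model in the paper additionally includes rotational and wake-capture force components, which are not reproduced here.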
Autonomous ultrasound scanning robotic system based on human posture recognition and image servo control: an application for cardiac imaging
Frontiers in Robotics and AI Pub Date : 2024-05-07 DOI: 10.3389/frobt.2024.1383732
Xiuhong Tang, Hongbo Wang, Jingjing Luo, Jinlei Jiang, Fan Nian, Lizhe Qi, Lingfeng Sang, Zhongxue Gan
{"title":"Autonomous ultrasound scanning robotic system based on human posture recognition and image servo control: an application for cardiac imaging","authors":"Xiuhong Tang, Hongbo Wang, Jingjing Luo, Jinlei Jiang, Fan Nian, Lizhe Qi, Lingfeng Sang, Zhongxue Gan","doi":"10.3389/frobt.2024.1383732","DOIUrl":"https://doi.org/10.3389/frobt.2024.1383732","url":null,"abstract":"In traditional cardiac ultrasound diagnostics, the process of planning scanning paths and adjusting the ultrasound window relies solely on the experience and intuition of the physician, a method that not only affects the efficiency and quality of cardiac imaging but also increases the workload for physicians. To overcome these challenges, this study introduces a robotic system designed for autonomous cardiac ultrasound scanning, with the goal of advancing both the degree of automation and the quality of imaging in cardiac ultrasound examinations. The system achieves autonomous functionality through two key stages: initially, in the autonomous path planning stage, it utilizes a camera posture adjustment method based on the human body’s central region and its planar normal vectors to achieve automatic adjustment of the camera’s positioning angle; precise segmentation of the human body point cloud is accomplished through efficient point cloud processing techniques, and precise localization of the region of interest (ROI) based on keypoints of the human body. Furthermore, by applying isometric path slicing and B-spline curve fitting techniques, it independently plans the scanning path and the initial position of the probe. Subsequently, in the autonomous scanning stage, an innovative servo control strategy based on cardiac image edge correction is introduced to optimize the quality of the cardiac ultrasound window, integrating position compensation through admittance control to enhance the stability of autonomous cardiac ultrasound imaging, thereby obtaining a detailed view of the heart’s structure and function. A series of experimental validations on human and cardiac models have assessed the system’s effectiveness and precision in the correction of camera pose, planning of scanning paths, and control of cardiac ultrasound imaging quality, demonstrating its significant potential for clinical ultrasound scanning applications.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"102 11","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141003615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
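The admittance-control position compensation mentioned above can be illustrated with a one-dimensional sketch: the contact-force error drives a virtual mass-damper-spring, and the resulting displacement is added to the probe's commanded position so contact stays gentle. The gains and the toy force profile below are assumptions, not the paper's values.

```python
# 1-D admittance control: M*x'' + B*x' + K*x = f_error, integrated with explicit Euler.
M, B, K = 1.0, 30.0, 200.0    # virtual mass, damping, stiffness (assumed)
dt = 0.001                    # control period (s)
x = v = 0.0                   # compensation displacement and velocity along the probe axis
f_desired = 3.0               # desired contact force (N), assumed

trajectory = []
for k in range(2000):
    f_measured = 5.0 if k < 1000 else 2.0       # toy force measurement (N)
    f_error = f_measured - f_desired
    a = (f_error - B * v - K * x) / M           # acceleration of the virtual system
    v += a * dt
    x += v * dt
    trajectory.append(x)                        # add x to the commanded probe position

print("steady-state compensation (m): %.4f" % trajectory[999])  # approx. f_error / K = 0.01
```

Choosing B and K trades responsiveness against how firmly the probe tracks the planned path; a stiffer K keeps the probe closer to the B-spline path, while a softer K yields more to the patient's chest.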
Editorial: Understanding and engineering cyber-physical collectives
Frontiers in Robotics and AI Pub Date : 2024-05-06 DOI: 10.3389/frobt.2024.1407421
Roberto Casadei, Lukas Esterle, Rose Gamble, Paul Harvey, Elizabeth F. Wanner
{"title":"Editorial: Understanding and engineering cyber-physical collectives","authors":"Roberto Casadei, Lukas Esterle, Rose Gamble, Paul Harvey, Elizabeth F. Wanner","doi":"10.3389/frobt.2024.1407421","DOIUrl":"https://doi.org/10.3389/frobt.2024.1407421","url":null,"abstract":"","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"33 20","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141010784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Computational kinematics of dance: distinguishing hip hop genres
Frontiers in Robotics and AI Pub Date : 2024-05-02 DOI: 10.3389/frobt.2024.1295308
Ben Baker, Tony Liu, J. Matelsky, Felipe Parodi, Brett Mensh, John W. Krakauer, Konrad Kording
{"title":"Computational kinematics of dance: distinguishing hip hop genres","authors":"Ben Baker, Tony Liu, J. Matelsky, Felipe Parodi, Brett Mensh, John W. Krakauer, Konrad Kording","doi":"10.3389/frobt.2024.1295308","DOIUrl":"https://doi.org/10.3389/frobt.2024.1295308","url":null,"abstract":"Dance plays a vital role in human societies across time and culture, with different communities having invented different systems for artistic expression through movement (genres). Differences between genres can be described by experts in words and movements, but these descriptions can only be appreciated by people with certain background abilities. Existing dance notation schemes could be applied to describe genre-differences, however they fall substantially short of being able to capture the important details of movement across a wide spectrum of genres. Our knowledge and practice around dance would benefit from a general, quantitative and human-understandable method of characterizing meaningful differences between aspects of any dance style; a computational kinematics of dance. Here we introduce and apply a novel system for encoding bodily movement as 17 macroscopic, interpretable features, such as expandedness of the body or the frequency of sharp movements. We use this encoding to analyze Hip Hop Dance genres, in part by building a low-cost machine-learning classifier that distinguishes genre with high accuracy. Our study relies on an open dataset (AIST++) of pose-sequences from dancers instructed to perform one of ten Hip Hop genres, such as Breakdance, Popping, or Krump. For comparison we evaluate moderately experienced human observers at discerning these sequence’s genres from movements alone (38% where chance = 10%). The performance of a baseline, Ridge classifier model was fair (48%) and that of the model resulting from our automated machine learning pipeline was strong (76%). This indicates that the selected features represent important dimensions of movement for the expression of the attitudes, stories, and aesthetic values manifested in these dance forms. Our study offers a new window into significant relations of similarity and difference between the genres studied. Given the rich, complex, and culturally shaped nature of these genres, the interpretability of our features, and the lightweight techniques used, our approach has significant potential for generalization to other movement domains and movement-related applications.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"75 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141021914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
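The feature-then-classify pipeline described above can be sketched as follows: compute interpretable kinematic features from a pose sequence (here, a toy "expandedness" and a sharp-movement fraction, echoing two of the kinds of features the paper names) and fit a Ridge classifier. The pose sequences below are random stand-ins rather than AIST++ data, and the exact feature definitions are assumptions.

```python
# Interpretable kinematic features + Ridge classification on synthetic pose sequences.
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def features(pose_seq):
    """pose_seq: (frames, joints, 3) array of 3D joint positions."""
    centroid = pose_seq.mean(axis=1, keepdims=True)
    expandedness = np.linalg.norm(pose_seq - centroid, axis=2).mean()   # mean joint spread
    speed = np.linalg.norm(np.diff(pose_seq, axis=0), axis=2).mean(axis=1)
    accel = np.abs(np.diff(speed))
    sharpness = (accel > accel.mean() + 2 * accel.std()).mean()         # fraction of sharp frames
    return np.array([expandedness, sharpness])

# Two synthetic "genres" that differ slightly in how expanded the poses are.
X, y = [], []
for label in (0, 1):
    for _ in range(100):
        seq = rng.normal(0, 1.0 + 0.3 * label, size=(120, 17, 3))
        X.append(features(seq))
        y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RidgeClassifier().fit(X_tr, y_tr)
print("toy accuracy:", clf.score(X_te, y_te))
```

The paper's actual system uses 17 such features and an automated model-selection pipeline on real dancer pose sequences; the sketch only shows the shape of the approach.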