Frontiers in Robotics and AI: Latest Articles

Collective predictive coding hypothesis: symbol emergence as decentralized Bayesian inference
Frontiers in Robotics and AI Pub Date : 2024-07-23 DOI: 10.3389/frobt.2024.1353870
Tadahiro Taniguchi
Abstract: Understanding the emergence of symbol systems, especially language, requires the construction of a computational model that reproduces both the developmental learning process in everyday life and the evolutionary dynamics of symbol emergence throughout history. This study introduces the collective predictive coding (CPC) hypothesis, which emphasizes and models the interdependence between forming internal representations through physical interactions with the environment and sharing and utilizing meanings through social semiotic interactions within a symbol emergence system. The total system dynamics is theorized from the perspective of predictive coding. The hypothesis draws inspiration from computational studies grounded in probabilistic generative models and language games, including the Metropolis–Hastings naming game. Thus, playing such games among agents in a distributed manner can be interpreted as a decentralized Bayesian inference of representations shared by a multi-agent system. Moreover, this study explores the potential link between the CPC hypothesis and the free-energy principle, positing that symbol emergence adheres to the society-wide free-energy principle. Furthermore, this paper provides a new explanation for why large language models appear to possess knowledge about the world based on experience, even though they have neither sensory organs nor bodies. This paper reviews past approaches to symbol emergence systems, offers a comprehensive survey of related prior studies, and presents a discussion on CPC-based generalizations. Future challenges and potential cross-disciplinary research avenues are highlighted.
Citations: 4
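The abstract interprets the Metropolis–Hastings naming game as decentralized Bayesian inference over a shared lexicon. A toy sketch of one such game, under simplifying assumptions not taken from the paper (a single object, categorical beliefs, and listener-only belief updates), might look like this:

```python
import random

# Hypothetical, heavily simplified Metropolis-Hastings naming game:
# each agent holds pseudo-counts P(name | object); a speaker proposes a
# name and the listener accepts it with an MH acceptance ratio, so a
# shared name emerges without any central coordinator.

NAMES = ["wa", "mo", "ki"]

class Agent:
    def __init__(self):
        # pseudo-counts for a single object; uniform prior
        self.counts = {n: 1.0 for n in NAMES}

    def prob(self, name):
        return self.counts[name] / sum(self.counts.values())

    def sample_name(self):
        # sample a name proportionally to the agent's pseudo-counts
        r = random.random() * sum(self.counts.values())
        for n, c in self.counts.items():
            r -= c
            if r <= 0:
                return n
        return NAMES[-1]

def mh_exchange(speaker, listener, current):
    """One exchange: speaker proposes, listener accepts/rejects (MH step)."""
    proposal = speaker.sample_name()
    ratio = listener.prob(proposal) / listener.prob(current)
    if random.random() < min(1.0, ratio):
        listener.counts[proposal] += 1.0  # acceptance reinforces the belief
        return proposal
    return current

random.seed(0)
a, b = Agent(), Agent()
shared = "wa"
for t in range(200):
    speaker, listener = (a, b) if t % 2 == 0 else (b, a)
    shared = mh_exchange(speaker, listener, shared)
print(shared)  # one of "wa", "mo", "ki": the name the pair has settled on
```

Because each acceptance raises the listener's probability of the accepted name, repeated exchanges concentrate both agents' beliefs on a common symbol, which is the decentralized-inference reading the paper gives to such games.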
Adaptive satellite attitude control for varying masses using deep reinforcement learning
Frontiers in Robotics and AI Pub Date : 2024-07-23 DOI: 10.3389/frobt.2024.1402846
Wiebke Retagne, Jonas Dauer, Günther Waxenegger-Wilfing
Abstract: Traditional spacecraft attitude control often relies heavily on the dimension and mass information of the spacecraft. In active debris removal scenarios, these characteristics cannot be known beforehand because the debris can take any shape or mass. Additionally, it is not possible to measure the mass of the combined system of satellite and debris object in orbit. Therefore, it is crucial to develop an adaptive satellite attitude control that can extract mass information about the satellite system from other measurements. The authors propose using deep reinforcement learning (DRL) algorithms, employing stacked observations to handle widely varying masses. The satellite is simulated in Basilisk software, and the control performance is assessed using Monte Carlo simulations. The results demonstrate the benefits of DRL with stacked observations compared to a classical proportional–integral–derivative (PID) controller for the spacecraft attitude control. The algorithm is able to adapt, especially in scenarios with changing physical properties.
Citations: 0
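Stacked observations give a memoryless policy a short state history from which hidden parameters such as an unknown mass can be inferred. A minimal sketch of such a wrapper (dimensions and stack depth are illustrative assumptions, not the authors' Basilisk setup):

```python
from collections import deque
import numpy as np

class StackedObservations:
    """Concatenate the last k observations so a policy can infer hidden
    dynamics parameters (e.g., an unknown mass) from how the state
    responds over time, instead of from a single snapshot."""

    def __init__(self, obs_dim, k=4):
        self.k = k
        self.obs_dim = obs_dim
        self.buffer = deque(maxlen=k)

    def reset(self, obs):
        self.buffer.clear()
        for _ in range(self.k):      # pad the stack with the first observation
            self.buffer.append(obs)
        return self.get()

    def step(self, obs):
        self.buffer.append(obs)      # oldest observation drops out automatically
        return self.get()

    def get(self):
        return np.concatenate(self.buffer)

stack = StackedObservations(obs_dim=3, k=4)
first = stack.reset(np.zeros(3))
nxt = stack.step(np.ones(3))
print(first.shape, nxt.shape)  # (12,) (12,)
```

The stacked vector, rather than the raw observation, is what the DRL policy would consume; the same idea underlies the frame-stacking wrappers common in RL libraries.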
Towards reconciling usability and usefulness of policy explanations for sequential decision-making systems
Frontiers in Robotics and AI Pub Date : 2024-07-22 DOI: 10.3389/frobt.2024.1375490
Pradyumna Tambwekar, Matthew C. Gombolay
Abstract: Safety-critical domains often employ autonomous agents that follow a sequential decision-making setup, whereby the agent follows a policy to dictate the appropriate action at each step. AI practitioners often employ reinforcement learning algorithms to allow an agent to find the best policy. However, sequential systems often lack clear and immediate signs of wrong actions, with consequences visible only in hindsight, making it difficult for humans to understand system failure. In reinforcement learning, this is referred to as the credit assignment problem. To effectively collaborate with an autonomous system, particularly in a safety-critical setting, explanations should enable a user to better understand the policy of the agent and predict system behavior so that users are cognizant of potential failures and these failures can be diagnosed and mitigated. However, humans are diverse and have innate biases or preferences which may enhance or impair the utility of a policy explanation of a sequential agent. Therefore, in this paper, we designed and conducted a human-subjects experiment to identify the factors which influence the perceived usability and the objective usefulness of policy explanations for reinforcement learning agents in a sequential setting. Our study had two factors: the modality of policy explanation shown to the user (Tree, Text, Modified Text, and Programs) and the "first impression" of the agent, i.e., whether the user saw the agent succeed or fail in the introductory calibration video. Our findings characterize a preference-performance tradeoff wherein participants perceived language-based policy explanations to be significantly more usable; however, participants were better able to objectively predict the agent's behavior when provided an explanation in the form of a decision tree. Our results demonstrate that user-specific factors, such as computer science experience (p < 0.05), and situational factors, such as watching the agent crash (p < 0.05), can significantly impact the perception and usefulness of the explanation. This research provides key insights to alleviate prevalent issues regarding inappropriate compliance and reliance, which are exponentially more detrimental in safety-critical settings, providing a path forward for XAI developers for future work on policy explanations.
Citations: 0
Semantic learning from keyframe demonstration using object attribute constraints
Frontiers in Robotics and AI Pub Date : 2024-07-18 DOI: 10.3389/frobt.2024.1340334
Busra Sen, Jos Elfring, Elena Torta, René van de Molengraft
Abstract: Learning from demonstration is an approach that allows users to personalize a robot's tasks. While demonstrations often focus on conveying the robot's motion or task plans, they can also communicate user intentions through object attributes in manipulation tasks. For instance, users might want to teach a robot to sort fruits and vegetables into separate boxes or to place cups next to plates of matching colors. This paper introduces a novel method that enables robots to learn the semantics of user demonstrations, with a particular emphasis on the relationships between object attributes. In our approach, users demonstrate essential task steps by manually guiding the robot through the necessary sequence of poses. We reduce the amount of data by utilizing only robot poses instead of trajectories, allowing us to focus on the task's goals, specifically the objects related to these goals. At each step, known as a keyframe, we record the end-effector pose, object poses, and object attributes. However, the number of keyframes saved in each demonstration can vary due to the user's decisions. This variability in each demonstration can lead to inconsistencies in the significance of keyframes, complicating keyframe alignment to generalize the robot's motion and the user's intention. Our method addresses this issue by focusing on teaching the higher-level goals of the task using only the required keyframes and relevant objects. It aims to teach the rationale behind object selection for a task and generalize this reasoning to environments with previously unseen objects. We validate our proposed method by conducting three manipulation tasks aiming at different object attribute constraints. In the reproduction phase, we demonstrate that even when the robot encounters previously unseen objects, it can generalize the user's intention and execute the task.
Citations: 0
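Each keyframe in these demonstrations records an end-effector pose, object poses, and symbolic object attributes. A hypothetical record of such data (field names and attribute vocabulary are illustrative, not taken from the authors' implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Keyframe:
    """One demonstration step: poses plus the symbolic object attributes
    from which attribute constraints (e.g., 'matching colors') could be
    learned and later matched against previously unseen objects."""
    end_effector_pose: tuple                               # (x, y, z, qx, qy, qz, qw)
    object_poses: dict = field(default_factory=dict)       # object name -> pose tuple
    object_attributes: dict = field(default_factory=dict)  # object name -> {attr: value}

kf = Keyframe(
    end_effector_pose=(0.4, 0.1, 0.2, 0.0, 0.0, 0.0, 1.0),
    object_poses={"cup_1": (0.5, 0.1, 0.0, 0.0, 0.0, 0.0, 1.0)},
    object_attributes={"cup_1": {"color": "red", "category": "cup"}},
)
print(kf.object_attributes["cup_1"]["color"])  # red
```

Storing attributes symbolically alongside poses is what lets a learner reason over relations such as "same color as the plate" rather than over raw trajectories.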
Gaze detection as a social cue to initiate natural human-robot collaboration in an assembly task
Frontiers in Robotics and AI Pub Date : 2024-07-17 DOI: 10.3389/frobt.2024.1394379
Matteo Lavit Nicora, Pooja Prajod, Marta Mondellini, Giovanni Tauro, Rocco Vertechy, Elisabeth André, Matteo Malosio
Abstract: Introduction: In this work we explore a potential approach to improve human-robot collaboration experience by adapting cobot behavior based on natural cues from the operator. Methods: Inspired by the literature on human-human interactions, we conducted a wizard-of-oz study to examine whether a gaze towards the cobot can serve as a trigger for initiating joint activities in collaborative sessions. In this study, 37 participants engaged in an assembly task while their gaze behavior was analyzed. We employed a gaze-based attention recognition model to identify when the participants look at the cobot. Results: Our results indicate that in most cases (83.74%), the joint activity is preceded by a gaze towards the cobot. Furthermore, during the entire assembly cycle, the participants tend to look at the cobot mostly around the time of the joint activity. Given the above results, a fully integrated system triggering joint action only when the gaze is directed towards the cobot was piloted with 10 volunteers, one of whom was characterized by high-functioning Autism Spectrum Disorder. Even though they had never interacted with the robot and did not know about the gaze-based triggering system, most of them successfully collaborated with the cobot and reported a smooth and natural interaction experience. Discussion: To the best of our knowledge, this is the first study to analyze the natural gaze behavior of participants working on a joint activity with a robot during a collaborative assembly task and to attempt the full integration of an automated gaze-based triggering system.
Citations: 0
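The piloted system fires the joint action only when the operator's gaze is directed at the cobot. A toy sketch of dwell-time gating over per-frame attention labels (the dwell threshold and the frame representation are assumptions for illustration, not values from the paper):

```python
def gaze_trigger(attention_frames, dwell_needed=5):
    """Return the frame index at which the joint action fires: the first
    frame completing `dwell_needed` consecutive frames classified as
    'looking at the cobot', or None if the gaze never dwells that long.
    Requiring a dwell, rather than a single frame, suppresses spurious
    triggers from brief glances or classifier noise."""
    run = 0
    for i, looking_at_cobot in enumerate(attention_frames):
        run = run + 1 if looking_at_cobot else 0
        if run >= dwell_needed:
            return i
    return None

# Boolean output of a per-frame gaze-based attention recognizer:
frames = [False, True, True, False, True, True, True, True, True]
print(gaze_trigger(frames))  # 8
```

In a real pipeline the booleans would come from the gaze-based attention recognition model the authors mention; the gating logic itself is independent of how attention is classified.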
Distributed safe formation tracking control of multiquadcopter systems using barrier Lyapunov function
Frontiers in Robotics and AI Pub Date : 2024-07-15 DOI: 10.3389/frobt.2024.1370104
Nargess Sadeghzadeh-Nokhodberiz, Mohammad Reza Sadeghi, Rohollah Barzamini, Allahyar Montazeri
Abstract: Coordinating the movements of a robotic fleet using consensus-based techniques is an important problem in achieving the desired goal of a specific task. Although most available techniques developed for consensus-based control ignore the collision of robots in the transient phase, they are either computationally expensive or cannot be applied in environments with dynamic obstacles. Therefore, we propose a new distributed collision-free formation tracking control scheme for multiquadcopter systems by exploiting the properties of the barrier Lyapunov function (BLF). Accordingly, the problem is formulated in a backstepping setting, and a distributed control law that guarantees collision-free formation tracking of the quads is derived. In other words, the problems of both tracking and interagent collision avoidance with a predefined accuracy are formulated using the proposed BLF for position subsystems, and the controllers are designed through augmentation of a quadratic Lyapunov function. Owing to the underactuated nature of the quadcopter system, virtual control inputs are considered for the translational (x and y axes) subsystems that are then used to generate the desired values for the roll and pitch angles for the attitude control subsystem. This provides a hierarchical controller structure for each quadcopter. The attitude controller is designed for each quadcopter locally by taking into account a predetermined error limit by another BLF. Finally, simulation results from the MATLAB-Simulink environment are provided to show the accuracy of the proposed method. A numerical comparison with an optimization-based technique is also provided to prove the superiority of the proposed method in terms of the computational cost, steady-state error, and response time.
Citations: 0
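For designs that enforce a predefined error bound, a log-type barrier Lyapunov function is the standard construction in the BLF literature (the abstract does not state the exact function used, so the following is a representative form, not necessarily the authors'):

```latex
V(z) = \frac{1}{2}\,\ln\!\frac{k_b^{2}}{k_b^{2}-z^{2}}, \qquad |z| < k_b,
```

where $z$ is a tracking (or interagent distance) error and $k_b$ its prescribed bound. Since $V(z) \to \infty$ as $|z| \to k_b$, any control law that keeps $V$ bounded along trajectories keeps $z$ strictly inside the bound, which is how constraints such as collision avoidance with predefined accuracy are enforced in backstepping designs of this kind.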
Enhancing emotional expression in cat-like robots: strategies for utilizing tail movements with human-like gazes
Frontiers in Robotics and AI Pub Date : 2024-07-15 DOI: 10.3389/frobt.2024.1399012
Xinxiang Wang, Zihan Li, Songyang Wang, Yiming Yang, Yibo Peng, Changzeng Fu
Abstract: In recent years, there has been significant growth in research on emotion expression in the field of human-robot interaction. In the process of human-robot interaction, the effect of the robot's emotional expression determines the user's experience and acceptance. Gaze is widely accepted as an important medium for expressing emotions in human-human interaction. However, it has been found that users have difficulty in effectively recognizing emotions such as happiness and anger expressed by animaloid robots through eye contact alone. In addition, in real interaction, effective nonverbal expression includes not only eye contact but also physical expression. However, current animaloid social robots consider human-like eyes as the main emotion expression pathway, which results in a mismatch between the robot's appearance and its behavioral approach, affecting the quality of emotional expression. While retaining the effectiveness of the eyes for emotional communication, we added a mechanical tail as a physical channel to enhance the robot's emotional expression in concert with the eyes. The results show that the collaboration between the mechanical tail and the bionic eyes enhances emotional expression in all four emotions. Furthermore, we found that the mechanical tail can enhance the expression of specific emotions with different parameters. This study contributes to enhancing the robot's emotional expression ability in human-robot interaction and improving the user's interaction experience.
Citations: 0
Enhancing Buoyant force learning through a visuo-haptic environment: a case study
Frontiers in Robotics and AI Pub Date : 2024-07-12 DOI: 10.3389/frobt.2024.1276027
L. Neri, J. Noguez, David Escobar-Castillejos, Víctor Robledo-Rella, R. García-Castelán, Andres González-Nucamendi, Alejandra J. Magana, Bedrich Benes
Abstract: Introduction: This study aimed to develop, implement, and test a visuo-haptic simulator designed to explore the buoyancy phenomenon for freshman engineering students enrolled in physics courses. The primary goal was to enhance students' understanding of physical concepts through an immersive learning tool. Methods: The visuo-haptic simulator was created using the VIS-HAPT methodology, which provides high-quality visualization and reduces development time. A total of 182 undergraduate students were randomly assigned to either an experimental group that used the simulator or a control group that received an equivalent learning experience in terms of duration and content. Data were collected through pre- and post-tests and an exit-perception questionnaire. Results: Data analysis revealed that the experimental group achieved higher learning gains than the control group (p = 0.079). Additionally, students in the experimental group expressed strong enthusiasm for the simulator, noting its positive impact on their understanding of physical concepts. The VIS-HAPT methodology also reduced the average development time compared to similar visuo-haptic simulators. Discussion: The results demonstrate the efficacy of the buoyancy visuo-haptic simulator in improving students' learning experiences and validate the utility of the VIS-HAPT method for creating immersive educational tools in physics.
Citations: 0
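The physics the simulator renders is Archimedes' principle (stated here as background; the abstract does not give the simulator's force model):

```latex
F_b = \rho_{\mathrm{fluid}}\, g\, V_{\mathrm{displaced}}
```

A haptic device would then display the net vertical force $F_b - mg$ on the submerged object, so students feel the upward pull grow as more volume is displaced.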
Evaluation of a passive wearable arm ExoNET
Frontiers in Robotics and AI Pub Date : 2024-07-10 DOI: 10.3389/frobt.2024.1387177
P. Ryali, Valentino Wilson, C. Celian, Adith V. Srivatsa, Yaseen Ghani, Jeremy Lentz, James L. Patton
Abstract: Wearable ExoNETs offer a novel, wearable solution to support and facilitate upper extremity gravity compensation in healthy, unimpaired individuals. In this study, we investigated the safety and feasibility of gravity compensating ExoNETs on 10 healthy, unimpaired individuals across a series of tasks, including activities of daily living and resistance exercises. The direct muscle activity and kinematic effects of gravity compensation were compared to a sham control and no device control. Mixed effects analysis revealed significant reductions in muscle activity at the biceps, triceps and medial deltoids with effect sizes of −3.6%, −4.5%, and −7.2% rmsMVC, respectively, during gravity support. There were no significant changes in movement kinematics as evidenced by minimal change in coverage metrics at the wrist. These findings reveal the potential for the ExoNET to serve as an alternative to existing bulky and encumbering devices in post-stroke rehabilitation settings and pave the way for future clinical trials.
Citations: 0
Robotont 3–an accessible 3D-printable ROS-supported open-source mobile robot for education and research
Frontiers in Robotics and AI Pub Date : 2024-07-10 DOI: 10.3389/frobt.2024.1406645
Eva Mõtshärg, V. Vunder, Renno Raudmäe, Marko Muro, Ingvar Drikkit, Leonid Tšigrinski, Raimo Köidam, A. Aabloo, Karl Kruusamäe
Abstract: Educational robots offer a platform for training aspiring engineers and building trust in technology that is envisioned to shape how we work and live. In education, accessibility and modularity are significant in the choice of such a technological platform. In order to foster continuous development of the robots as well as to improve student engagement in the design and fabrication process, safe production methods with low accessibility barriers should be chosen. In this paper, we present Robotont 3, an open-source mobile robot that leverages Fused Deposition Modeling (FDM) 3D-printing for manufacturing the chassis and a single dedicated system board that can be ordered from online printed circuit board (PCB) assembly services. To promote accessibility, the project follows open hardware practices, such as design transparency, permissive licensing, accessibility in manufacturing methods, and comprehensive documentation. Semantic Versioning was incorporated to improve maintainability in development. Compared to the earlier versions, Robotont 3 maintains all the technical capabilities, while featuring an improved hardware setup to enhance the ease of fabrication and assembly, and modularity. The improvements increase the accessibility, scalability and flexibility of the platform in an educational setting.
Citations: 0