2023 9th International Conference on Automation, Robotics and Applications (ICARA): Latest Publications

Kinematic Error Model for Trolley-type PUU
2023 9th International Conference on Automation, Robotics and Applications (ICARA) | Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125719
J. Ke, Y. J. Wang
{"title":"Kinematic Error Model for Trolley-type PUU","authors":"J. Ke, Y. J. Wang","doi":"10.1109/ICARA56516.2023.10125719","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125719","url":null,"abstract":"Because of the ability to move at high speed and the high stiffness, Parallel Kinematic Machines (PKMs) have been widely used in industry and research in recent years. In this paper, the kinematic error model was established for the trolley-type PUU (prismatic-universal-universal joint). Three translation errors were added to the ideal kinematic model present the joint position because of manufacturing and assembly. The calibration process included measuring the positioning error by a laser interferometer, identifying the error parameters using least square method and revised the kinematic model in the controller. After compensation, the positioning errors of end effector were indeed reduced. It could show the practicality of this model. Moreover, modifying the kinematics in the controller is a time-saving and convenient compensation method.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128749969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
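The calibration step described in the abstract (identifying joint translation errors from laser-interferometer measurements with least squares) can be illustrated by the following minimal sketch. It is not the authors' implementation: the per-pose identification Jacobians, the noise level, and the linear error model are hypothetical placeholders.

```python
import numpy as np

# Minimal sketch of least-squares identification of kinematic error
# parameters: the positioning error is assumed approximately linear in
# the three translation errors, delta_p ~= J @ delta_e (an assumption
# for illustration, not the paper's exact formulation).
rng = np.random.default_rng(0)
n_poses = 20
J_stack = rng.normal(size=(n_poses, 3, 3))    # hypothetical identification Jacobians
true_err = np.array([0.05, -0.02, 0.03])      # unknown translation errors (mm)
measured = J_stack @ true_err + 0.001 * rng.normal(size=(n_poses, 3))  # interferometer data

# Stack all poses into one overdetermined linear system and solve it.
A = J_stack.reshape(-1, 3)       # (3 * n_poses, 3)
b = measured.reshape(-1)         # (3 * n_poses,)
delta_e, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)

print("identified translation errors:", delta_e)
# These identified parameters would then be written back into the
# controller's kinematic model to compensate the positioning error.
```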
Software Model for Robot Programming and Example of Implementation for Navigation System
2023 9th International Conference on Automation, Robotics and Applications (ICARA) | Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125856
Samira Chaychi, D. Zampuniéris, Sandro Reis
{"title":"Software Model for Robot Programming and Example of Implementation for Navigation System","authors":"Samira Chaychi, D. Zampuniéris, Sandro Reis","doi":"10.1109/ICARA56516.2023.10125856","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125856","url":null,"abstract":"In this paper, we are going to consider a current challenge in a robotic software system. We consider a problem, which is the lack of separation of concerns in robotic systems, and propose a software model to address the problem and resolve the current challenges. The core purpose of this paper is to demonstrate the advantages of using separation of concerns principles to create a well-ordered model of independent components that address separated concerns individually. Considering the problem, we developed a software model with the help of a proactive engine to address the challenges. We use robotic operating systems to help us to implement the robot simulator.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131643073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Classifying a Sensorimotor Skill of Periodontal Probing
2023 9th International Conference on Automation, Robotics and Applications (ICARA) | Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125743
Vahan Babushkin, Muhammad Hassan Jamil, Dianne Sefo, P. Loomer, M. Eid
{"title":"Classifying a Sensorimotor Skill of Periodontal Probing","authors":"Vahan Babushkin, Muhammad Hassan Jamil, Dianne Sefo, P. Loomer, M. Eid","doi":"10.1109/ICARA56516.2023.10125743","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125743","url":null,"abstract":"Currently available dental simulators provide a wide range of visual, auditory, and haptic cues to play back the pre-recorded skill, however, they do not extract skill descriptors and do not attempt to model the skill. To ensure efficient communication of a sensorimotor skill, a model that captures the skill's main features and provides real-time feedback and guidance based on the user's expertise is desirable. To develop this model, a complex periodontal probing skill can be considered as a composition of primitives, that can be extracted from the recordings of several professionals performing the probing task. This model will be capable of evaluating the user's proficiency level to ensure adaptation and providing corresponding guidance and feedback. We developed a SVM model that characterizes the sensorimotor skill of periodontal probing by detecting the specific region of the tooth being probed. We explore the features affecting the accuracy of the model and provide a reduced feature set capable of capturing the regions with relatively high accuracy. Finally, we consider the problem of periodontal pocket detection. The SVM model trained to detect pockets was able to achieve a recall around 0.68. We discuss challenges associated with pocket detection and propose directions for future work.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123339069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
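As a rough illustration of the classification setup described above, the sketch below trains an RBF-kernel SVM to predict a tooth-region label from per-sample feature vectors. The feature dimensionality, number of regions, and the synthetic data are assumptions; the paper's actual features come from recorded probing motions of professionals.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Hypothetical probing recordings: each sample is a feature vector
# (e.g., probe pose and force statistics) labeled with the tooth
# region being probed. Real features would come from expert recordings.
rng = np.random.default_rng(42)
n_samples, n_features, n_regions = 600, 12, 6
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_regions, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Standardize the features, then fit an RBF-kernel SVM classifier.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test), zero_division=0))
```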
Obstacle-surmounting Analysis of a Novel Deformable Wheel-track Composite Walking Platform
2023 9th International Conference on Automation, Robotics and Applications (ICARA) | Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125846
Wang Kun, Liang Huawei
{"title":"Obstacle-surmounting Analysis of a Novel Deformable Wheel-track Composite Walking Platform","authors":"Wang Kun, Liang Huawei","doi":"10.1109/ICARA56516.2023.10125846","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125846","url":null,"abstract":"Based on high mobility requirements of unmanned ground platforms in various environments and combining the advantages of traditional wheeled, crawler and legged walking mechanisms, a novel deformable wheel-track composite walking mechanism with multi-movement modes is proposed,obtaining better obstacle-surmounting performances than the famous reconfigurable wheel-track mechanism(RWT type) in recent years. The walking mechanism has two stable forms of round wheel configuration and prolate polygon configuration with three modes of wheel rolling, track movement and gait rotation, which are respectively suitable for travelling on hard roads at high speeds, moving on soft earth with low contact pressure and crossing obstacles. Design of the innovative structure and a platform equipped with the novel machines is carried out. Subsequently, the platform's obstacle surmounting performance is analyzed through dynamic modelling and calculation. Ultimately to verify the advantages of the machine and its design rationality, comparative simulation experiments of obstacle-surmounting performances is implemented in the multi-body simulation environment,setting up obstacle models and building experimental platform models integrating this machine and the RWT type machine respectively.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122885757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Oblivious Robots Performing Different Tasks on Grid Without Knowing Their Team Members
2023 9th International Conference on Automation, Robotics and Applications (ICARA) | Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125816
Satakshi Ghosh, Avisek Sharma, Pritam Goswami, B. Sau
{"title":"Oblivious Robots Performing Different Tasks on Grid Without Knowing Their Team Members","authors":"Satakshi Ghosh, Avisek Sharma, Pritam Goswami, B. Sau","doi":"10.1109/ICARA56516.2023.10125816","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125816","url":null,"abstract":"Two fundamental problems of distributed computing are Gathering and Arbitrary pattern formation (APF). These two tasks are different in nature as in gathering robots meet at a point but in Apfrobots form a fixed pattern in distinct positions. In most of the current literature on swarm robot algorithms, it is assumed that all robots in the system perform one single task together. Two teams of oblivious robots deployed in the same system and different teams of robots performing two different works simultaneously where no robot knows the team of another robot is a new concept in the literature introduced by Bhagat et al. [ICDCN'2020]. In this work, a swarm of silent and oblivious robots are deployed on an infinite grid under an asynchronous scheduler. The robots do not have access to any global coordinates. Some of the robots are given input of an arbitrary but unique pattern. The set of robots with the given pattern is assigned the task of forming the given pattern on the grid. The remaining robots are assigned with the task of gathering to a vertex of the grid (not fixed from earlier and not any point where a robot that is forming a pattern terminates). Each robot knows to which team it belongs, but can not recognize the team of another robot. Considering weak multiplicity detection, a distributed algorithm is presented in this paper which leads the robots with the input pattern into forming it and other robots into gathering on a vertex of the grid on which no other robot forming the pattern, terminates.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"200 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133794768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Robot Locomotion Control Using Central Pattern Generator with Non-linear Bio-mimetic Neurons
2023 9th International Conference on Automation, Robotics and Applications (ICARA) | Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125666
V. Vivekanand, S. Hashemkhani, Shanmuga Venkatachalam, R. Kubendran
{"title":"Robot Locomotion Control Using Central Pattern Generator with Non-linear Bio-mimetic Neurons","authors":"V. Vivekanand, S. Hashemkhani, Shanmuga Venkatachalam, R. Kubendran","doi":"10.1109/ICARA56516.2023.10125666","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125666","url":null,"abstract":"Central pattern generators (CPG) generate rhythmic gait patterns that can be tuned to exhibit various locomotion behaviors like walking, trotting, etc. CPGs inspired by biology have been implemented previously in robotics to generate periodic motion patterns. This paper aims to take the inspiration even further to present a novel methodology to control movement of a four-legged robot using a non-linear bio-mimetic neuron model. In contrast to using regular leaky integrate and fire (LIF) neurons to create coupled neural networks, our design uses non-linear neurons constituting a mixed-feedback (positive and negative) control system operating at multiple timescales (fast, slow and ultraslow ranging from sub-ms to seconds), to generate a variety of spike patterns that control the robotic limbs and hence its gait. The use of spikes as motor control signals allows for low memory usage and low latency operation of the robot. Unlike LIF neurons, the bio-mimetic neurons are also jitter tolerant making the CPG network more resilient and robust to perturbations in the input stimulus. As a proof of concept, we implemented our model on the Petoi Bittle bot, a quadruped pet dog robot and were able to reliably observe different modes of locomotion-walk, trot and jump. Four bio-mimetic neurons forming a CPG network to control the four limbs were implemented on Arduino microcontroller and compared to a similar CPG built using four LIF neurons. The differential equations for both neurons were solved real-time on Arduino and profiled for memory usage, latency and jitter tolerance. The CPG using bio-mimetic non-linear neurons used marginally higher memory (378 bytes, 18% higher than LIF neurons), incurred insignificant latency of 3.54ms compared to motor activation delay of 200ms, while providing upto 5-10x higher jitter tolerance.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115845021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
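For context, the baseline the paper compares against (a CPG assembled from coupled leaky integrate-and-fire neurons) can be sketched as below. This is a minimal illustration with made-up constants, not the authors' non-linear bio-mimetic neuron model, which uses mixed positive/negative feedback at multiple timescales.

```python
import numpy as np

# Two leaky integrate-and-fire (LIF) neurons with weak mutual
# inhibition: a minimal coupled-oscillator sketch. Each neuron fires
# rhythmically under tonic drive, and the cross-inhibition shifts the
# relative timing of the two spike trains. All constants are
# illustrative, not taken from the paper.
dt, T = 1e-3, 2.0                     # 1 ms step, 2 s of simulated time
steps = int(T / dt)
tau_v, tau_s = 0.02, 0.05             # membrane and synaptic time constants (s)
v_th, v_reset = 1.0, 0.0
I_drive = np.array([100.0, 90.0])     # slightly asymmetric tonic drive
w_inhib = 10.0                        # cross-inhibition strength

v = np.array([0.0, 0.5])              # stagger the initial membrane states
syn = np.zeros(2)                     # inhibitory synaptic activation
spikes = np.zeros((steps, 2), dtype=bool)

for t in range(steps):
    inhib = w_inhib * syn[::-1]                 # inhibition from the other neuron
    v += (-v / tau_v + I_drive - inhib) * dt    # LIF membrane update
    syn += (-syn / tau_s) * dt                  # exponential synaptic decay
    fired = v >= v_th
    spikes[t] = fired
    syn[fired] += 1.0                           # each spike bumps its synapse
    v[fired] = v_reset

# Each spike train would drive one limb (or diagonal limb pair); the
# relative phase of the trains shapes the gait pattern.
print("spike counts per neuron:", spikes.sum(axis=0))
```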
A Model-Based Approach for Robotics Education with Emphasis on Embedded Systems
2023 9th International Conference on Automation, Robotics and Applications (ICARA) | Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125977
R. Beneder, Patrick Schmitt, Clemens Környefalvy
{"title":"A Model-Based Approach for Robotics Education with Emphasis on Embedded Systems","authors":"R. Beneder, Patrick Schmitt, Clemens Környefalvy","doi":"10.1109/ICARA56516.2023.10125977","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125977","url":null,"abstract":"Typically, universities with a focus on technical sciences provide courses where students have to design control systems, to implement these control systems on embedded hardware and to verify the functionality of their implementations. Hence, the students work in groups and implement practical demonstrators based on given problem statements. In order to design, implement, and test robotic applications, it is mandatory to utilize expertise within the field of robotics and the field of embedded systems. The combination of expertise within both fields (robotics and embedded systems) is a highly demanded skill set, which is required to work for companies with focus on aviation, automotive, and even emerging applications for agricultural technology. The technical complexity of these applications is increasing almost exponentially, which requires abstract model-based approaches to ease the design flow of such implementations. This paper introduces a model-based approach for students within robotics and/or embedded systems degree programs. Moreover, this paper describes the state-of-the-art workflow to implement problem statements within the field of robotics and embedded systems (tools, approach and test), gives and overview of the model-based approach for students within these field of applications, and shows the integration of the results into courses based on a control system model.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129609608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Frame of Events: A Low-latency Resource-efficient Approach for Stereo Depth Maps
2023 9th International Conference on Automation, Robotics and Applications (ICARA) | Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125817
Shanmuga Venkatachalam, V. Vivekanand, R. Kubendran
{"title":"Frame of Events: A Low-latency Resource-efficient Approach for Stereo Depth Maps","authors":"Shanmuga Venkatachalam, V. Vivekanand, R. Kubendran","doi":"10.1109/ICARA56516.2023.10125817","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125817","url":null,"abstract":"Computer vision traditionally uses cameras that capture visual information as frames at periodic intervals. On the other hand, Dynamic Vision Sensors (DVS) capture temporal contrast (TC) in each pixel asynchronously and stream them serially. This paper proposes a hybrid approach to generate input visual data as ‘frame of events’ for a stereo vision pipeline. We demonstrate that using hybrid vision sensors that produce frames made up of TC events can achieve superior results in terms of low latency, less compute and low memory footprint as compared to the traditional cameras and the event-based DVS. The frame-of-events approach eliminates the latency and memory resources involved in the accumulation of asynchronous events into synchronous frames, while generating acceptable disparity maps for depth estimation. Benchmarking results show that the frame-of-events pipeline outperforms others with the least average latency per frame of 3.8 ms and least average memory usage per frame of 112.4 Kb, which amounts to 7.32% and 9.75% reduction when compared to traditional frame-based pipeline. Hence, the proposed method is suitable for missioncritical robotics applications that involve path planning and localization mapping in a resource-constrained environment, such as drone navigation and autonomous vehicles.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125079776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
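A minimal sketch of the downstream stereo step is shown below: temporal-contrast events are binned into 2D frames and a standard block-matching matcher computes the disparity map. In the paper the hybrid sensor produces such frames directly in hardware, avoiding the software accumulation shown here; the event format, resolution, and matcher settings are assumptions.

```python
import numpy as np
import cv2

# Sketch of a "frame of events" stereo pipeline: temporal-contrast
# events are binned into 2D frames for the left and right sensors, and
# a block-matching stereo algorithm computes a disparity map. Event
# format, resolution, and matcher settings are assumptions, not the
# authors' implementation.
H, W = 260, 346                       # hypothetical DVS resolution

def events_to_frame(events, height, width):
    """Accumulate (x, y, polarity) events into an 8-bit frame."""
    frame = np.zeros((height, width), dtype=np.float32)
    for x, y, pol in events:
        frame[y, x] += 1.0 if pol > 0 else -1.0
    frame -= frame.min()
    if frame.max() > 0:
        frame *= 255.0 / frame.max()
    return frame.astype(np.uint8)

# Hypothetical event streams for one accumulation window; the right
# stream is the left stream shifted by a fake 4-pixel disparity.
rng = np.random.default_rng(1)
left_events = [(rng.integers(0, W), rng.integers(0, H), rng.choice([-1, 1]))
               for _ in range(5000)]
right_events = [(max(x - 4, 0), y, p) for x, y, p in left_events]

left = events_to_frame(left_events, H, W)
right = events_to_frame(right_events, H, W)

# Block-matching disparity on the event frames (fixed-point output / 16).
matcher = cv2.StereoBM_create(numDisparities=16, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0
print("disparity range:", disparity.min(), disparity.max())
```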
A Novel Framework for Ground Segmentation Using 3D Point Cloud
2023 9th International Conference on Automation, Robotics and Applications (ICARA) | Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10126038
Xu Wang, Huachao Yu, Caixia Lu, Xueyan Liu, Xing Cui, Xijun Zhao, Bo Su
{"title":"A Novel Framework for Ground Segmentation Using 3D Point Cloud","authors":"Xu Wang, Huachao Yu, Caixia Lu, Xueyan Liu, Xing Cui, Xijun Zhao, Bo Su","doi":"10.1109/ICARA56516.2023.10126038","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10126038","url":null,"abstract":"Ground segmentation is an essential preprocessing task for autonomous driving. Most existing 3D LiDAR-based ground segmentation methods segment the ground by fitting a ground model. However, these methods may fail to achieve ground segmentation in some challenging terrains, such as slope roads. In this paper, a novel framework is proposed to improve the performance of these methods. First, vertical points in the point cloud are filtered out by a gradient-based method. Second, a polar grid map is built to extract the seed points for model fitting. Moreover, the fitting-based method is used to model the ground. And a coarse segmentation result can be obtained by the fitted model. Next, the coarse segmentation result is used to update the ground height value for each grid in the grid map. Finally, the segmentation result is refined by the grid map. Experiments on the SemanticKITTI dataset have shown that the fitting-based method can achieve more accurate segmentation results by integrating with our proposed framework.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125653971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
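The model-fitting core of such a pipeline can be sketched as follows: a plane is fitted to low-height seed points with least squares and points close to the plane are labeled as ground. This illustrates only the fitting step under assumed thresholds; the paper's vertical-point filtering, polar grid map, and grid-based refinement are not reproduced.

```python
import numpy as np

# Minimal sketch of fitting-based ground segmentation: fit a plane
# z = a*x + b*y + c to seed points with least squares, then label
# points near the plane as ground. Thresholds are illustrative.
def segment_ground(points, seed_height=0.3, dist_thresh=0.15, iters=3):
    """points: (N, 3) array of x, y, z LiDAR coordinates."""
    seeds = points[points[:, 2] < points[:, 2].min() + seed_height]
    ground_mask = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        # Least-squares plane fit on the current seed set.
        A = np.c_[seeds[:, :2], np.ones(len(seeds))]
        coef, *_ = np.linalg.lstsq(A, seeds[:, 2], rcond=None)
        # Point-to-plane distance for every point in the cloud.
        z_plane = np.c_[points[:, :2], np.ones(len(points))] @ coef
        dist = np.abs(points[:, 2] - z_plane) / np.sqrt(coef[0] ** 2 + coef[1] ** 2 + 1.0)
        ground_mask = dist < dist_thresh
        seeds = points[ground_mask]       # refit on the updated ground set
    return ground_mask

# Hypothetical point cloud: a flat ground plane plus some raised obstacles.
rng = np.random.default_rng(3)
ground = np.c_[rng.uniform(-20, 20, (3000, 2)), rng.normal(0.0, 0.03, 3000)]
obstacles = np.c_[rng.uniform(-20, 20, (500, 2)), rng.uniform(0.5, 2.0, 500)]
cloud = np.vstack([ground, obstacles])
mask = segment_ground(cloud)
print(f"ground points: {mask.sum()} / {len(cloud)}")
```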
Shiitake Mushroom Semantic Segmentation Method Based on Search Focus Network
2023 9th International Conference on Automation, Robotics and Applications (ICARA) | Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125799
Juan Du, Songxuan Liu
{"title":"Shiitake Mushroom Semantic Segmentation Method Based on Search Focus Network","authors":"Juan Du, Songxuan Liu","doi":"10.1109/ICARA56516.2023.10125799","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125799","url":null,"abstract":"The substantially similar texture features of sticks and shiitake mushrooms in the mushroom-growing environment make precisely labeled samples more expensive and semantic segmentation of shiitake mushrooms more challenging. In this paper, a search focus network(SFNet) for semantic segmentation of shiitake mushrooms was proposed, which utilized the group-reversal attention module(GRAM) to strengthen semantic information understanding and trained via transfer learning and data augmentation strategies. The experimental results on the self-built shiitake mushroom sticks dataset revealed that structural measure $S_{alpha}$, weighted F-measure $F_{beta}^{omega}$, adaptive E-measure $E_{phi}^{ad}$, and absolute mean error $M$ of SFNet were 0.9161, 0.9113, 0.9808, and 0.0049, respectively, with practical and steady performance. With only a few training samples, the proposed approach can accomplish the semantic segmentation task of shiitake mushrooms.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122220122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
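Of the four metrics reported above, the mean absolute error $M$ is the simplest to compute; a minimal sketch with hypothetical masks is given below. The S-measure, weighted F-measure, and E-measure have more involved definitions and are not shown.

```python
import numpy as np

# Minimal sketch of the mean absolute error M between a predicted
# segmentation map and its ground truth, both assumed to lie in [0, 1].
def mae(pred, gt):
    return np.abs(pred.astype(np.float64) - gt.astype(np.float64)).mean()

# Hypothetical prediction / ground-truth masks.
rng = np.random.default_rng(7)
gt_mask = (rng.random((256, 256)) > 0.7).astype(np.float32)
pred_mask = np.clip(gt_mask + rng.normal(0.0, 0.05, gt_mask.shape), 0.0, 1.0)
print("MAE:", mae(pred_mask, gt_mask))
```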