2022 IEEE International Conference on Robotics and Biomimetics (ROBIO) — Latest Publications

Full Utilization of a Single Image by Characterizing Multiple Regions of Interest for Line Tracing
2022 IEEE International Conference on Robotics and Biomimetics (ROBIO) Pub Date : 2022-12-05 DOI: 10.1109/ROBIO55434.2022.10011795
Jinsung Ahn, Y. Yamakawa
{"title":"Full Utilization of a Single Image by Characterizing Multiple Regions of Interest for Line Tracing","authors":"Jinsung Ahn, Y. Yamakawa","doi":"10.1109/ROBIO55434.2022.10011795","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10011795","url":null,"abstract":"This paper presents a new method of image processing for the line tracing task, which is one of the simple and fundamental tasks that has been applied to an unmanned system, utilizing multiple regions of interest to draw information from the entire image which was discarded in traditional image processing method for more accurate and flexible line trace. This new method divides the acquired image by machine vision into 3 regions: feedback region, prediction region, and inspection region. And different process was applied to each region to acquire parameters depending on the characteristics of each region that can enhance line tracing performance. In this paper, parameters of the new method are applied to the proportional control method and implemented to the robot arm and the camera and evaluated with the basic proportional control by comparing adaptability to a sharp curve. Consequently, the new method provided more adaptability in line tracing compared to the traditional single region of interest method.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115232115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
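The abstract above describes feeding ROI-derived parameters into a proportional controller. As a minimal illustration of the underlying idea, here is a hedged sketch of single-ROI proportional line tracing; all names, the gain value, and the binary-mask input format are assumptions for illustration, not details from the paper (which uses multiple ROIs with region-specific processing).

```python
# Hedged sketch: proportional line-tracing control from a single ROI.
# The gain, normalization, and mask format are illustrative assumptions.
import numpy as np

def line_offset(roi: np.ndarray) -> float:
    """Normalized horizontal offset of the line within a binary ROI.

    roi: 2-D array where nonzero pixels belong to the detected line.
    Returns a value in [-1, 1]; 0 means the line is centered.
    """
    cols = np.nonzero(roi)[1]
    if cols.size == 0:
        return 0.0  # no line visible; hold course
    center = roi.shape[1] / 2.0
    return (cols.mean() - center) / center

def p_control(offset: float, kp: float = 0.8) -> float:
    """Proportional steering command that counters the offset."""
    return -kp * offset

roi = np.zeros((4, 100), dtype=np.uint8)
roi[:, 70:80] = 1                     # line lies right of center
cmd = p_control(line_offset(roi))
print(round(cmd, 3))                  # negative: steer back toward the line
```

The paper's contribution is precisely that it goes beyond this single-ROI scheme, adding prediction and inspection regions whose parameters would modify such a controller.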
A Stair-Climbing Robot with Star-wheel Configuration
2022 IEEE International Conference on Robotics and Biomimetics (ROBIO) Pub Date : 2022-12-05 DOI: 10.1109/ROBIO55434.2022.10011928
Tongxin Cui, Wenhui Wang, Zheng Zhu, Jing Wu, Zhenzhong Jia
{"title":"A Stair-Climbing Robot with Star-wheel Configuration","authors":"Tongxin Cui, Wenhui Wang, Zheng Zhu, Jing Wu, Zhenzhong Jia","doi":"10.1109/ROBIO55434.2022.10011928","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10011928","url":null,"abstract":"This paper proposes a four-wheel mobile robot that uses passive star-wheel configuration to climb stairs. Without adding any control complexity, the robot can climb a standard indoor staircase of 15 x 28cm at a speed of 0.7 $s$ / step. For stairs of other sizes, it also has a certain adaptability, avoiding the problem of slipping when using active star-wheel to climb stairs. This paper analyzes the obstacle-surmounting conditions of the four-star-wheel robot during stair climbing, and uses statics to calculate the driving torque. We use simulation to validate the torque consumption study, and prove the stability of the robot's central trajectory during stair climbing. These results provide a basis to quantify the robot's stair-climbing capability under certain load. We build an robot prototype platform and conduct physical experimentation to validate the robot performance.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"975 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123077849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Design and Experiments of a Robotic Arm with a Rigid-Soft Coupling Structure*
2022 IEEE International Conference on Robotics and Biomimetics (ROBIO) Pub Date : 2022-12-05 DOI: 10.1109/ROBIO55434.2022.10011683
W. He, Tao Wang
{"title":"Design and Experiments of a Robotic Arm with a Rigid-Soft Coupling Structure*","authors":"W. He, Tao Wang","doi":"10.1109/ROBIO55434.2022.10011683","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10011683","url":null,"abstract":"Aiming at the insufficient load capacity of a soft robot, a robotic arm based on rigid-soft coupling structure was proposed. It is composed of fluidic soft actuators external installed and a rigid skeleton in central. The stiffness variation of the skeleton is realized by using jamming principle. The rigid skeleton is a Y-shaped linkage mechanism, which can realize bending and elongation simultaneously. This coupling structure has the same degrees of freedom in comparison with the original three-chamber soft robotic arm without the rigid skeleton. A prototype of the robotic arm as well as an experimental setup were developed. Load capacity experiments and dynamic response experiments of the robotic arm were implemented respectively. The results verify that the proposed rigid-soft coupling robotic arm is superior to the skeletonless soft robotic arm in terms of load-carrying capacity and dynamic response performance.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"273 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127333126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
RBT-HCI: A Reliable Behavior Tree Planning Method with Human-Computer Interaction
2022 IEEE International Conference on Robotics and Biomimetics (ROBIO) Pub Date : 2022-12-05 DOI: 10.1109/ROBIO55434.2022.10011651
Yunlong Wu, Jinghua Li, Haoxiang Jin, Jiexin Zhang, Yanzhen Wang
{"title":"RBT-HCI: A Reliable Behavior Tree Planning Method with Human-Computer Interaction","authors":"Yunlong Wu, Jinghua Li, Haoxiang Jin, Jiexin Zhang, Yanzhen Wang","doi":"10.1109/ROBIO55434.2022.10011651","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10011651","url":null,"abstract":"In this paper, we propose RBT-HCI, a reliable behavior tree (BT) planning method with human-computer interaction, aiming at generating an interpretable and human-acceptable BT. Compared with other BT generation methods, RBT-HCI can reliably plan a BT based on the knowledge base. When an available BT cannot be planned automatically, instead of terminating or relaxing the rules, RBT-HCI provides a new idea, which is to make decisions through human-computer interaction, thereby enhancing the reliability and robustness of the method. The effectiveness of RBT-HCI is verified by an example of robot grasping objects, showing that a reliable and robust planning result can be obtained through knowledge-based automatic planning and human-computer interaction.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125123443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
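The key idea in the abstract above, falling back to a human decision when automatic planning fails instead of terminating or relaxing the rules, can be sketched in a few lines. This is an illustrative toy, not the paper's actual BT planner; the knowledge-base format and all names here are assumptions.

```python
# Hedged sketch of the human-in-the-loop fallback described in the abstract.
# The knowledge base is modeled as a simple goal -> plan mapping; the real
# RBT-HCI method plans full behavior trees, which this toy does not attempt.
from typing import Callable, Optional

def plan_with_fallback(kb: dict, goal: str,
                       ask_human: Callable[[str], Optional[str]]) -> str:
    """Return a plan for the goal, consulting the human on a planning miss."""
    plan = kb.get(goal)
    if plan is None:
        plan = ask_human(goal)  # human-computer interaction step
    if plan is None:
        raise RuntimeError(f"no plan available for goal: {goal}")
    return plan

kb = {"grasp cup": "approach; close gripper; lift"}
print(plan_with_fallback(kb, "grasp cup", lambda g: None))
print(plan_with_fallback(kb, "grasp bottle", lambda g: "operator-provided plan"))
```

The design point is that the human is queried only when the knowledge base cannot resolve the goal, so routine cases stay fully automatic.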
Visuotactile Feedback Parallel Gripper for Robotic Adaptive Grasping
2022 IEEE International Conference on Robotics and Biomimetics (ROBIO) Pub Date : 2022-12-05 DOI: 10.1109/ROBIO55434.2022.10011873
Boyue Zhang, Shaowei Cui, C. Zhang, Jingyi Hu, Shuo Wang
{"title":"Visuotactile Feedback Parallel Gripper for Robotic Adaptive Grasping","authors":"Boyue Zhang, Shaowei Cui, C. Zhang, Jingyi Hu, Shuo Wang","doi":"10.1109/ROBIO55434.2022.10011873","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10011873","url":null,"abstract":"In robot grasp and dexterous manipulation tasks, tactile sensing is important for the control adjustment of the manipulator. In this paper, we present a novel low-cost parallel gripper with high-resolution tactile sensing, named the GelStereo Gripper. Furthermore, an adaptive grasp strategy is proposed to endow the gripper with tactile-feedback grasp stability-maintaining ability. We install the gripper on our robot platform and conduct various grasp experiments by utilizing proposed control methods. Experimental results verify the reliability of the GelStereo gripper and also prove the effectiveness of the proposed strategy for experimental objects with different features.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125174560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Optimized Fingering Planning for Automatic Piano Playing Using Dual-arm Robot System
2022 IEEE International Conference on Robotics and Biomimetics (ROBIO) Pub Date : 2022-12-05 DOI: 10.1109/ROBIO55434.2022.10011881
Wenjing Shi, Yihui Li, Y. Guan, Xiaohan Chen, Shengtian Yang, Senyu Mo
{"title":"Optimized Fingering Planning for Automatic Piano Playing Using Dual-arm Robot System","authors":"Wenjing Shi, Yihui Li, Y. Guan, Xiaohan Chen, Shengtian Yang, Senyu Mo","doi":"10.1109/ROBIO55434.2022.10011881","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10011881","url":null,"abstract":"Robots can be used to play musical instruments such as the piano, but existing robots are not automated enough for this purpose. In this paper, we design an automatic finger and arm planning system for dual-arm robots, whose input object is a commonly used bi-phonic musical score with high-pitched and low-pitched voices. The digital score can be obtained from methods in the field of Optical Score Recognition. We combine the characteristics of the score and the robot to automatically generate fingering using a Dynamic Planning approach. Then we generate the movements of the robot's arms based on the annotated fingering. Finally, it is demonstrated by simulated experiments. Our method is more applicable to robots than general fingering generation algorithms.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"223 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125832515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Precise LiDAR SLAM in Structured Scene Using Finite Plane and Prior Constraint
2022 IEEE International Conference on Robotics and Biomimetics (ROBIO) Pub Date : 2022-12-05 DOI: 10.1109/ROBIO55434.2022.10011847
Yuhui Xie, Wentao Zhao, Jiahao Wang, Jingchuan Wang, Weidong Chen
{"title":"Precise LiDAR SLAM in Structured Scene Using Finite Plane and Prior Constraint","authors":"Yuhui Xie, Wentao Zhao, Jiahao Wang, Jingchuan Wang, Weidong Chen","doi":"10.1109/ROBIO55434.2022.10011847","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10011847","url":null,"abstract":"In this paper, we propose a precise LiDAR SLAM in optimization framework using plane-like object as landmark. Compared to general methods, finite plane feature is used to represent landmark and a new residual model is designed, making constraint from edge of landmark can be used to limit the parallel position between LiDAR and landmark, leading to a more accurate result. Moreover, floor plan is used to provide global pose for reduction of drift, and additionally feature orientation prior is used to prevent map distortion when updating the inaccurate part of floor plan. Experiments are conducted using data collected in real environment. The result qualitatively shows that proposed method can build a corrected map from floor plan without distortion, and quantitatively verifies that proposed method can outperform other baseline methods in accuracy.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126650035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Channel-distribution Hybrid Deep Learning for sEMG-based Gesture Recognition
2022 IEEE International Conference on Robotics and Biomimetics (ROBIO) Pub Date : 2022-12-05 DOI: 10.1109/ROBIO55434.2022.10011951
Keyi Lu, Hao Guo, Fei Qi, Peihao Gong, Zhihao Gu, Lining Sun, Haibo Huang
{"title":"Channel-distribution Hybrid Deep Learning for sEMG-based Gesture Recognition","authors":"Keyi Lu, Hao Guo, Fei Qi, Peihao Gong, Zhihao Gu, Lining Sun, Haibo Huang","doi":"10.1109/ROBIO55434.2022.10011951","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10011951","url":null,"abstract":"In recent years, CNNs (Convolutional Neural Networks) with their powerful feature representation and feature learning capabilities, have played an important role in gesture recognition tasks based on sparse multichannel surface EMG signals. As each muscle group in the upper limb plays a different role in a particular hand movement, we propose a hybrid CNN model that considers the spatial distribution of muscle groups in the myoelectric channel to improve the accuracy of hand gesture recognition. The model takes the spectrogram of CWT (Continuous Wavelet Transform) as input, based on the spatial distribution of channels, decomposes all channels into multiple input streams, lets the CNN learn the features of each stream separately, and gradually fuses (slowly fusion) the features learned by each stream, and then performs gesture classification. Finally, the results of several of these stream-division methods are fused for decision making to obtain classification accuracies. 
The proposed model was validated and tested on the Nina Pro DB4 dataset, and the average accuracy was improved compared to both traditional machine learning methods and multi-stream CNN models that do not take into account the spatial distribution of channels.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126956445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
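The channel decomposition step described in the abstract above, splitting the sEMG channels into spatially coherent input streams before per-stream feature learning, can be sketched as below. The channel-to-group assignment here is invented for illustration; the paper's actual grouping of Ninapro DB4 channels by muscle-group location is not given in the abstract.

```python
# Hedged sketch: grouping CWT spectrograms of sEMG channels into input
# streams by spatial channel distribution. Group assignments are illustrative.
import numpy as np

def split_streams(spectrograms: np.ndarray, groups: list) -> list:
    """spectrograms: (channels, freq, time) array of CWT magnitudes.
    groups: list of channel-index lists, one per muscle-group stream.
    Returns one (len(group), freq, time) array per stream."""
    return [spectrograms[g] for g in groups]

x = np.random.rand(12, 32, 64)                     # 12 channels of CWT maps
groups = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
streams = split_streams(x, groups)
print([s.shape for s in streams])                  # three (4, 32, 64) streams
```

Each stream would then enter its own convolutional branch, with features fused gradually across branches (the "slow fusion" the abstract mentions) before the final classification layer.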
Speech-Vision Based Multi-Modal AI Control of a Magnetic Anchored and Actuated Endoscope
2022 IEEE International Conference on Robotics and Biomimetics (ROBIO) Pub Date : 2022-12-05 DOI: 10.1109/ROBIO55434.2022.10011904
Jixiu Li, Yisen Huang, W. Ng, T. Cheng, Xixin Wu, Q. Dou, Helen M. Meng, P. Heng, Yunhui Liu, S. Chan, D. Navarro-Alarcon, Calvin Sze Hang Ng, Philip Wai Yan Chiu, Zheng Li
{"title":"Speech-Vision Based Multi-Modal AI Control of a Magnetic Anchored and Actuated Endoscope","authors":"Jixiu Li, Yisen Huang, W. Ng, T. Cheng, Xixin Wu, Q. Dou, Helen M. Meng, P. Heng, Yunhui Liu, S. Chan, D. Navarro-Alarcon, Calvin Sze Hang Ng, Philip Wai Yan Chiu, Zheng Li","doi":"10.1109/ROBIO55434.2022.10011904","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10011904","url":null,"abstract":"In minimally invasive surgery (MIS), controlling the endoscope view is crucial for the operation. Many robotic endoscope holders were developed aiming to address this prob-lem,. These systems rely on joystick, foot pedal, simple voice command, etc. to control the robot. These methods requires surgeons extra effort and are not intuitive enough. In this paper, we propose a speech-vision based multi-modal AI approach, which integrates deep learning based instrument detection, automatic speech recognition and robot visual servo control. Surgeons could communicate with the endoscope by speech to indicate their view preference, such as the instrument to be tracked. The instrument is detected by the deep learning neural network. Then the endoscope takes the detected instrument as the target and follows it with the visual servo controller. This method is applied to a magnetic anchored and guided endoscope and evaluated experimentally. 
Preliminary results demonstrated this approach is effective and requires little efforts for the surgeon to control the endoscope view intuitively.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124699025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Design and Kinematic Modeling of a Concentric Torsionally-Steerable Flexible Surgical Robot
2022 IEEE International Conference on Robotics and Biomimetics (ROBIO) Pub Date : 2022-12-05 DOI: 10.1109/ROBIO55434.2022.10011976
Ning Zhang, Yixuan Kong, Hailin Huang, Shuang Song, Bing Li
{"title":"Design and Kinematic Modeling of a Concentric Torsionally-Steerable Flexible Surgical Robot","authors":"Ning Zhang, Yixuan Kong, Hailin Huang, Shuang Song, Bing Li","doi":"10.1109/ROBIO55434.2022.10011976","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10011976","url":null,"abstract":"Flexible robots with torsional motion can be used in transnasal surgery which can improve the flexibility of the end-effector during the operation. However, the torsion or twisting function still can not satisfy the requirement of the complex cavities with a different structure. This paper proposed a concentric torsionally-steer able (CTS) flexible surgical robot with novel concentric tendon-driven tubes. A 2L-RPRPR model based on the rigidized equivalence model is established to guide the spatial motion of the CTS robot. On the basis of this model, the cooperative motion between the inner and outer tubes can be realized, such as linear movement and rotation. In the meanwhile, the concentric tendon-driven tubes can perform different bending directions and curvatures according to various cavities. And the C-shape or S-shape with different curvatures required by the surgery operation can also be achieved. 
The results of the simulation and experiments show that the proposed CTS robot has larger workspace and higher operational flexibility, which are sufficient for surgical operation.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129918384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0