{"title":"DenseXFormer: An Effective Occluded Human Instance Segmentation Network based on Density Map for Nursing Robot","authors":"Sihao Qi, Jiexin Xie, Haitao Yan, Shijie Guo","doi":"10.1109/ROBIO58561.2023.10354873","DOIUrl":"https://doi.org/10.1109/ROBIO58561.2023.10354873","url":null,"abstract":"Human instance segmentation in occlusion scenarios remains a challenging task, especially in nursing scenarios, which hinders the development of nursing robots. Existing approaches are unable to focus the network’s attention on the occluded areas, which leads to unsatisfactory results. To address this issue, this paper proposes a novel and effective density map-based network for the instance segmentation task. Density map-based neural networks perform well in cases where human bodies occlude each other and can be trained without additional annotation information. Firstly, a density map generator (DMG) is introduced to generate accurate density information from feature maps computed by the backbone. Secondly, the density map is used to enhance features in the density fusion module (DFM), which focuses the network on high-density as well as occluded areas. Additionally, to remedy the lack of occlusion-focused datasets for nursing instance segmentation, a new dataset, NSR-dataset, is proposed. Extensive experiments on the public datasets (NSR and COCO-PersonOcc) show that the proposed method can be a powerful instrument for human instance segmentation. The improvements in both efficiency and accuracy are prominent.
The dataset is available at https://github.com/Monkey0806/NSR-dataset.","PeriodicalId":505134,"journal":{"name":"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"44 4","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139187378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
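The density-map idea in the abstract above can be sketched generically: each annotated person center is rendered as a small Gaussian, so the resulting map integrates to roughly the number of instances and peaks in crowded (high-occlusion) regions. This is an illustrative sketch, not the paper's DMG module; the function name, grid size, and `sigma` are choices made here.

```python
import math

def density_map(points, h, w, sigma=1.0):
    """Render point annotations (e.g. person centers) into an h x w
    density map by summing one isotropic Gaussian per point.

    The map sums to approximately len(points), so dense regions show
    up as high-density areas that a network can be steered toward.
    """
    dmap = [[0.0] * w for _ in range(h)]
    norm = 1.0 / (2.0 * math.pi * sigma * sigma)  # 2D Gaussian normalizer
    for (px, py) in points:
        for y in range(h):
            for x in range(w):
                d2 = (x - px) ** 2 + (y - py) ** 2
                dmap[y][x] += norm * math.exp(-d2 / (2.0 * sigma * sigma))
    return dmap
```

With `sigma=1.0` and a point well inside the grid, the map sums to ~1.0 per annotated point, which is what makes density maps usable as a soft instance count.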
{"title":"Real-Time RGB-D Pedestrian Tracking for Mobile Robot","authors":"Wenhao Liu, Wanlei Li, Tao Wang, Jun He, Yunjiang Lou","doi":"10.1109/ROBIO58561.2023.10354856","DOIUrl":"https://doi.org/10.1109/ROBIO58561.2023.10354856","url":null,"abstract":"Pedestrian tracking is an important research direction in the field of mobile robotics. In order to complete tasks more efficiently and without hindering the original intentions of pedestrians, mobile robots need to track pedestrians accurately in real time. In this paper, we propose a real-time RGB-D pedestrian tracking framework. First, we propose a pedestrian segmentation detection algorithm to detect pedestrians and obtain their two-dimensional positions. Second, given limited computational resources and the rarity of missed pedestrian detections, we use a nearest neighbor tracker for pedestrian tracking. To address the issue of inaccurate pedestrian localization, we use our detection algorithm to obtain the centers of pedestrians from RGB images. By combining them with point clouds, the 3D coordinates of pedestrians are obtained. Our method enables accurate pedestrian tracking in the world coordinate frame by adaptively fusing RGB images with their corresponding depth-based point clouds. Besides, our light-weight detection and tracking algorithm guarantees real-time pedestrian tracking for realistic mobile robot applications. To validate the effectiveness and real-time performance of the tracking algorithm, we conduct experiments using multiple pedestrian datasets of approximately half a minute in length, captured from two different perspectives.
To validate the practicality and accuracy of the tracking algorithm in real-world scenarios, we extend our tracking algorithm to apply it to trajectory prediction.","PeriodicalId":505134,"journal":{"name":"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"43 3","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139187405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
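The nearest-neighbor tracking step described above can be illustrated with a minimal greedy data-association sketch: each existing track claims its closest unclaimed detection within a gating distance. This is a generic illustration, not the authors' implementation; the function name, the Euclidean metric, and the `gate` threshold are assumptions made here.

```python
import math

def nearest_neighbor_associate(tracks, detections, gate=1.0):
    """Greedily match each track to its closest detection.

    tracks and detections are lists of (x, y) positions; returns a dict
    mapping track index -> detection index, or None when no detection
    lies within the gating distance.
    """
    assignments = {}
    used = set()  # detections already claimed by an earlier track
    for ti, t in enumerate(tracks):
        best, best_d = None, gate
        for di, d in enumerate(detections):
            if di in used:
                continue
            dist = math.hypot(t[0] - d[0], t[1] - d[1])
            if dist < best_d:
                best, best_d = di, dist
        if best is not None:
            used.add(best)
        assignments[ti] = best
    return assignments
```

Greedy nearest-neighbor association is O(tracks x detections) per frame with no solver dependency, which matches the abstract's motivation of limited computational resources and rare missed detections.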
{"title":"Feature Fusion Module Based on Gate Mechanism for Object Detection","authors":"Zepeng Sun, Dongyin Jin, Jian Deng, Mengyang Zhang, Zhenzhou Shao","doi":"10.1109/ROBIO58561.2023.10354575","DOIUrl":"https://doi.org/10.1109/ROBIO58561.2023.10354575","url":null,"abstract":"In recent years, deep learning based feature fusion has drawn significant attention in the field of information integration due to its robust representational and generative capabilities. However, existing methods struggle to effectively preserve essential information. To this end, this paper proposes a gate-based fusion module for object detection to integrate the information from distinct feature layers of convolutional neural networks. The gate structure of the fusion module adaptively selects features from neighboring layers, storing valuable information in memory units and passing it to the subsequent layer. This approach facilitates the fusion of high-level semantic and low-level detailed features. Experimental validation is conducted on the public Pascal VOC dataset. Experimental results demonstrate that the addition of the gate-based fusion module to the detection task leads to an average accuracy increment of up to 5%.","PeriodicalId":505134,"journal":{"name":"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"17 6","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139186713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
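The gate mechanism described above can be reduced to its simplest form: a sigmoid gate computed from both feature layers that blends high-level semantic features with low-level detail features. This is a minimal scalar-gate sketch, not the paper's module (which operates on CNN feature maps with memory units); the function names and the single-gate design are assumptions made here.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fuse(f_high, f_low, w_gate, bias=0.0):
    """Fuse a high-level and a low-level feature vector via a scalar
    sigmoid gate computed from both inputs.

    The gate decides how much of each layer's information to keep:
    g -> 1 keeps semantic features, g -> 0 keeps detail features.
    """
    # Gate input: a weighted sum over the concatenated features.
    score = bias + sum(w * v for w, v in zip(w_gate, f_high + f_low))
    g = sigmoid(score)
    # Convex blend of the two layers, element-wise.
    return [g * a + (1.0 - g) * b for a, b in zip(f_high, f_low)]
```

In a real detector the gate weights are learned end-to-end, so the network itself decides, per location, whether semantic or detail information is more valuable.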
{"title":"Fog-based Distributed Camera Network system for Surveillance Applications","authors":"Mvs Sakethram, Ps Saikrishna","doi":"10.1109/ROBIO58561.2023.10355008","DOIUrl":"https://doi.org/10.1109/ROBIO58561.2023.10355008","url":null,"abstract":"The Internet of Things (IoT) refers to a network of interconnected physical devices embedded with sensors, software, and network connectivity that enables them to collect and exchange data. Cloud computing refers to the delivery of computing resources and services over the Internet. The time it takes for IoT data to transit to the cloud and back might have a substantial influence on performance, especially for applications that need low latency. Fog computing has been proposed to address this constraint. Many issues need to be resolved in order to fully utilize the real-time analytics capabilities of the fog and IoT paradigms. In this paper, we work extensively with the iFogSim simulator to model IoT and fog environments with real-world challenges, focusing mainly on data transmission between fog nodes. We describe a case study and add constraints that create a realistic fog environment with a Distributed Camera Network System (DCNS).","PeriodicalId":505134,"journal":{"name":"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"111 12","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139186721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shape Analysis and Control of a Continuum Objects*","authors":"Yuqiao Dai, Peng Li, Shilin Zhang, Yunhui Liu","doi":"10.1109/ROBIO58561.2023.10354616","DOIUrl":"https://doi.org/10.1109/ROBIO58561.2023.10354616","url":null,"abstract":"Soft robots are a hot spot in today's robotics research because most of them exist in the form of continuums, yet recognizing and reproducing the shape of a continuum remains difficult. In this paper, we propose a method in which the shape features of the flexible continuum are obtained by contour centerline extraction and binocular camera reconstruction, and the relationship between the motor input and the shape output of the continuum is modeled using neural networks. A simulation environment is set up to test the shape estimation and shape control of the flexible continuum. Results show that this method can predict and reproduce the shape of the continuum well, and it can be used for shape control of continuum robots.","PeriodicalId":505134,"journal":{"name":"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"111 7","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139186724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on Horizontal Following Control of a Suspended Robot for Self-Momentum Targets","authors":"Dan Xiong, Yiyong Huang, Yanjie Yang, Hongwei Liu, Zhijie Jiang, Wei Han","doi":"10.1109/ROBIO58561.2023.10354971","DOIUrl":"https://doi.org/10.1109/ROBIO58561.2023.10354971","url":null,"abstract":"Micro/low gravity is one of the most prominent features of the outer space environment, and it significantly alters the force state and dynamics of spacecraft or astronauts compared to the Earth’s gravitational environment. It is crucial to simulate the micro/low gravity environment on the ground for astronaut training or spacecraft testing. The suspension method utilizes a pulley and sling mechanism to create a micro/low gravity environment. This method counteracts the gravitational force exerted on the object based on rope tension. The simulation effect greatly depends on the accuracy of the horizontal following system, which serves as the central subsystem of the suspension device. In this paper, we propose a dual-arm following system to solve the issue of horizontal following for self-momentum targets. In addition, we investigate adaptive suppression of flexible rope swing and coupling control between a robotic arm and a crane.
Physical experiments are conducted on the robotic system to verify the effectiveness of the proposed approach.","PeriodicalId":505134,"journal":{"name":"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"88 12","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139186736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual Servoing Using Cosine Similarity Metric","authors":"Wenbo Ning, Yecan Yin, Xiangfei Li, Huan Zhao, Yunfeng Fu, Han Ding","doi":"10.1109/ROBIO58561.2023.10354973","DOIUrl":"https://doi.org/10.1109/ROBIO58561.2023.10354973","url":null,"abstract":"This article presents a new visual servoing method based on a cosine similarity metric, which uses the cosine distance defined by cosine similarity as the optimization objective of histogram-based direct visual servoing (HDVS) to design the servoing control law. As a more compact global descriptor, the histogram makes direct visual servoing more robust against noise than directly using image intensity. Cosine similarity is the cosine of the angle between two vectors and has been widely employed to calculate the similarity between multidimensional data. The cosine distance derived from the cosine similarity is more sensitive to the directional difference between the histograms, giving the proposed method a faster convergence rate than the existing Matusita distance-based servoing method. This advantage is verified by simulations, and experiments are conducted on a manipulator to further verify the effectiveness of the proposed method in practical situations.","PeriodicalId":505134,"journal":{"name":"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"70 9","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139187090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
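The cosine distance used as the servoing objective above is a standard quantity and easy to state concretely: one minus the cosine of the angle between two histogram vectors. The sketch below is a generic definition, not the paper's control law; the function name is a choice made here.

```python
import math

def cosine_distance(h1, h2):
    """Cosine distance 1 - cos(theta) between two histograms.

    Zero when the histograms point in the same direction (regardless of
    scale), 1.0 when they are orthogonal; it is this sensitivity to
    direction that the servoing objective exploits.
    """
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = math.sqrt(sum(a * a for a in h1))
    n2 = math.sqrt(sum(b * b for b in h2))
    return 1.0 - dot / (n1 * n2)
```

Because the measure is scale-invariant, uniformly brighter or darker images with the same intensity distribution shape yield near-zero distance, which is one reason histogram-plus-cosine objectives are robust to noise.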
{"title":"The Enhanced Network Swin-T by CNN on Flow Pattern Recognition for Two-phase Image Dataset with Low Similarity","authors":"Jinsong Zhang, Deling Wang, Huadan Hao, Liangwen Yan","doi":"10.1109/ROBIO58561.2023.10354651","DOIUrl":"https://doi.org/10.1109/ROBIO58561.2023.10354651","url":null,"abstract":"In two-phase flow experiments with different material conditions and process parameters, the collected image dataset, with low similarity and few samples, makes it difficult for common deep learning algorithms to achieve high-precision flow pattern recognition due to their limited ability to extract global features. In this article, we propose a new deep learning algorithm that enhances the Swin-T network with a CNN, combining the advantages of the Swin-T network and Dynamic Region-Aware Convolution. The new algorithm retains the window multi-head self-attention mechanism and adds a self-attention adjustment module to enhance the extraction of image features and the convergence speed of the network. It significantly improves the recognition accuracy for different flow patterns in both sharp and blurred images. The CNN-enhanced Swin-T network is highly applicable to the classification of image datasets with low similarity and few samples.","PeriodicalId":505134,"journal":{"name":"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"59 3","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139187097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Inertia Estimation of Quadruped Robot under Load and Its Walking Control Strategy in Urban Complex Terrain","authors":"Qiang Fu, Muxuan Han, Yunjiang Lou, Ke Li, Zhiyuan Yu","doi":"10.1109/ROBIO58561.2023.10354861","DOIUrl":"https://doi.org/10.1109/ROBIO58561.2023.10354861","url":null,"abstract":"When the quadruped robot is engaged in logistics transportation tasks, it encounters a challenge where the distribution of the center of mass (CoM) of the loaded items is not only random but also subject to time variations. Consequently, the robot becomes susceptible to non-zero resultant torques, which inevitably impact its body posture during the walking process. This paper proposes a method to estimate the CoM inertia using four one-dimensional force sensors and a walking control strategy for complex urban terrain. The inertia tensor and CoM of the load are first estimated, then the robot’s dynamics are compensated, and foothold adjustments are made for underactuated orientations to compensate for the extra moment generated by the CoM offset. For uneven terrain, the terrain estimator and event-based gait are used to adjust the robot’s gait to reduce the impact of terrain changes on the robot. 
The effectiveness of the proposed method and the feasibility of load walking in urban terrain are verified through comparative experiments, complex terrain load walking experiments in Webots, and real prototype experiments.","PeriodicalId":505134,"journal":{"name":"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"69 11","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139187104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Uncertainty in Bayesian Reinforcement Learning for Robot Manipulation Tasks with Sparse Rewards","authors":"Li Zheng, Yanghong Li, Yahao Wang, Guangrui Bai, Haiyang He, Erbao Dong","doi":"10.1109/ROBIO58561.2023.10354785","DOIUrl":"https://doi.org/10.1109/ROBIO58561.2023.10354785","url":null,"abstract":"This paper aims to explore the application of Bayesian deep reinforcement learning (BDRL) in robot manipulation tasks with sparse rewards, focusing on addressing the uncertainty in complex and sparsely rewarded environments. Conventional deep reinforcement learning (DRL) algorithms still face significant challenges in the context of robot manipulation tasks. To address this issue, this paper proposes a general algorithm framework called BDRL that combines reinforcement learning algorithms with Bayesian networks to quantify the model uncertainty, aleatoric uncertainty in neural networks, and uncertainty in the reward function. The effectiveness and generality of the proposed algorithm are validated through simulation experiments on multiple sets of different sparsely rewarded tasks, employing various advanced DRL algorithms. 
The research results demonstrate that the DRL algorithm based on the Bayesian network mechanism significantly improves the convergence speed of the algorithms in sparse reward tasks by accurately estimating the model uncertainty.","PeriodicalId":505134,"journal":{"name":"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"48 6","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139187288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
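The model-uncertainty estimation central to the abstract above is often approximated in practice by the spread of an ensemble's predictions, a common stand-in for a full Bayesian posterior. The sketch below illustrates that generic idea only; it is not the paper's BDRL framework, and the function name and ensemble form are assumptions made here.

```python
import statistics

def ensemble_uncertainty(models, state):
    """Estimate epistemic (model) uncertainty from an ensemble.

    models is a list of callables mapping a state to a scalar
    prediction (e.g. a value estimate). The prediction mean serves as
    the point estimate; the variance across ensemble members is a
    proxy for model uncertainty: it shrinks where members agree and
    grows in poorly explored regions, which is what lets an agent
    direct exploration under sparse rewards.
    """
    preds = [m(state) for m in models]
    return statistics.mean(preds), statistics.pvariance(preds)
```

In a sparse-reward loop the variance term is typically folded into the action-selection rule (e.g. as an exploration bonus), so states where the ensemble disagrees get visited more often.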