{"title":"动态环境中受生物启发的视觉SLAM:基于实例分割和点-线特征融合的IPL-SLAM。","authors":"Jian Liu, Donghao Yao, Na Liu, Ye Yuan","doi":"10.3390/biomimetics10090558","DOIUrl":null,"url":null,"abstract":"<p><p>Simultaneous Localization and Mapping (SLAM) is a fundamental technique in mobile robotics, enabling autonomous navigation and environmental reconstruction. However, dynamic elements in real-world scenes-such as walking pedestrians, moving vehicles, and swinging doors-often degrade SLAM performance by introducing unreliable features that cause localization errors. In this paper, we define dynamic regions as areas in the scene containing moving objects, and dynamic features as the visual features extracted from these regions that may adversely affect localization accuracy. Inspired by biological perception strategies that integrate semantic awareness and geometric cues, we propose Instance-level Point-Line SLAM (IPL-SLAM), a robust visual SLAM framework for dynamic environments. The system employs YOLOv8-based instance segmentation to detect potential dynamic regions and construct semantic priors, while simultaneously extracting point and line features using Oriented FAST (Features from Accelerated Segment Test) and Rotated BRIEF (Binary Robust Independent Elementary Features), collectively known as ORB, and Line Segment Detector (LSD) algorithms. Motion consistency checks and angular deviation analysis are applied to filter dynamic features, and pose optimization is conducted using an adaptive-weight error function. A static semantic point cloud map is further constructed to enhance scene understanding. 
Experimental results on the TUM RGB-D dataset demonstrate that IPL-SLAM significantly outperforms existing dynamic SLAM systems-including DS-SLAM and ORB-SLAM2-in terms of trajectory accuracy and robustness in complex indoor environments.</p>","PeriodicalId":8907,"journal":{"name":"Biomimetics","volume":"10 9","pages":""},"PeriodicalIF":3.9000,"publicationDate":"2025-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12467914/pdf/","citationCount":"0","resultStr":"{\"title\":\"Towards Biologically-Inspired Visual SLAM in Dynamic Environments: IPL-SLAM with Instance Segmentation and Point-Line Feature Fusion.\",\"authors\":\"Jian Liu, Donghao Yao, Na Liu, Ye Yuan\",\"doi\":\"10.3390/biomimetics10090558\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Simultaneous Localization and Mapping (SLAM) is a fundamental technique in mobile robotics, enabling autonomous navigation and environmental reconstruction. However, dynamic elements in real-world scenes-such as walking pedestrians, moving vehicles, and swinging doors-often degrade SLAM performance by introducing unreliable features that cause localization errors. In this paper, we define dynamic regions as areas in the scene containing moving objects, and dynamic features as the visual features extracted from these regions that may adversely affect localization accuracy. Inspired by biological perception strategies that integrate semantic awareness and geometric cues, we propose Instance-level Point-Line SLAM (IPL-SLAM), a robust visual SLAM framework for dynamic environments. 
The system employs YOLOv8-based instance segmentation to detect potential dynamic regions and construct semantic priors, while simultaneously extracting point and line features using Oriented FAST (Features from Accelerated Segment Test) and Rotated BRIEF (Binary Robust Independent Elementary Features), collectively known as ORB, and Line Segment Detector (LSD) algorithms. Motion consistency checks and angular deviation analysis are applied to filter dynamic features, and pose optimization is conducted using an adaptive-weight error function. A static semantic point cloud map is further constructed to enhance scene understanding. Experimental results on the TUM RGB-D dataset demonstrate that IPL-SLAM significantly outperforms existing dynamic SLAM systems-including DS-SLAM and ORB-SLAM2-in terms of trajectory accuracy and robustness in complex indoor environments.</p>\",\"PeriodicalId\":8907,\"journal\":{\"name\":\"Biomimetics\",\"volume\":\"10 9\",\"pages\":\"\"},\"PeriodicalIF\":3.9000,\"publicationDate\":\"2025-08-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12467914/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Biomimetics\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.3390/biomimetics10090558\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomimetics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.3390/biomimetics10090558","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, 
MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0
Abstract
Simultaneous Localization and Mapping (SLAM) is a fundamental technique in mobile robotics, enabling autonomous navigation and environmental reconstruction. However, dynamic elements in real-world scenes (such as walking pedestrians, moving vehicles, and swinging doors) often degrade SLAM performance by introducing unreliable features that cause localization errors. In this paper, we define dynamic regions as areas in the scene containing moving objects, and dynamic features as the visual features extracted from these regions that may adversely affect localization accuracy. Inspired by biological perception strategies that integrate semantic awareness and geometric cues, we propose Instance-level Point-Line SLAM (IPL-SLAM), a robust visual SLAM framework for dynamic environments. The system employs YOLOv8-based instance segmentation to detect potential dynamic regions and construct semantic priors, while simultaneously extracting point and line features using Oriented FAST (Features from Accelerated Segment Test) and Rotated BRIEF (Binary Robust Independent Elementary Features), collectively known as ORB, and Line Segment Detector (LSD) algorithms. Motion consistency checks and angular deviation analysis are applied to filter dynamic features, and pose optimization is conducted using an adaptive-weight error function. A static semantic point cloud map is further constructed to enhance scene understanding. Experimental results on the TUM RGB-D dataset demonstrate that IPL-SLAM significantly outperforms existing dynamic SLAM systems, including DS-SLAM and ORB-SLAM2, in terms of trajectory accuracy and robustness in complex indoor environments.
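To illustrate the kind of angular-deviation check the abstract mentions (the paper's exact formulation is not reproduced here), a minimal NumPy sketch might flag matched features whose displacement direction deviates strongly from the dominant scene motion. The function name, the use of the median direction as the motion reference, and the threshold value are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def filter_dynamic_features(prev_pts, curr_pts, angle_thresh_deg=30.0):
    """Illustrative angular-deviation filter (not IPL-SLAM's exact method).

    prev_pts, curr_pts: (N, 2) arrays of matched feature positions in two
    consecutive frames. Features whose displacement direction deviates from
    the dominant (median) motion direction by more than angle_thresh_deg
    are flagged as dynamic. Returns a boolean mask of static features.
    """
    disp = curr_pts - prev_pts
    # Direction of each feature's apparent motion, in degrees.
    angles = np.degrees(np.arctan2(disp[:, 1], disp[:, 0]))
    # Median direction serves as a robust estimate of camera-induced motion.
    median_angle = np.median(angles)
    # Wrap angular differences into [-180, 180] before taking magnitudes.
    dev = np.abs((angles - median_angle + 180.0) % 360.0 - 180.0)
    return dev <= angle_thresh_deg

# Example: three features move right with the camera; one moves upward
# (e.g. on a moving object) and is rejected.
prev = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
curr = prev + np.array([1.0, 0.0])
curr[3] = prev[3] + np.array([0.0, 1.0])
mask = filter_dynamic_features(prev, curr)  # → [True, True, True, False]
```

In a full system this geometric test would be combined with the instance-segmentation priors, so that features inside segmented dynamic regions are treated with additional suspicion before pose optimization.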