Research on visual SLAM algorithm based on improved point-line feature fusion
Yu Zhang, Miao Dong
2023 International Conference on Pattern Recognition, Machine Vision and Intelligent Algorithms (PRMVIA), March 2023
DOI: 10.1109/prmvia58252.2023.00046
SLAM (simultaneous localization and mapping) is a technology used to solve the problem of localization and map building while a robot travels through an unfamiliar environment. Traditional SLAM relies on point features to estimate camera pose, which makes it difficult to extract enough point features in low-texture scenes; moreover, when the camera shakes violently or rotates too fast, the robustness of a point-based SLAM system is poor. To address the poor robustness of existing visual SLAM systems, this paper builds on the ORB-SLAM3 framework: the point feature extractor is replaced with a self-supervised deep neural network, and a match filtering algorithm based on thresholding and motion statistics is proposed to eliminate point mismatches, which significantly improves the system's real-time performance and accuracy. In addition, line features are integrated into front-end information extraction: a line feature extraction model is established, approximately collinear line segments are merged, and the line feature description and mismatch elimination process is simplified. Finally, a weight allocation scheme is introduced into the construction of the point-line error model, so that the weights of the point and line terms are allocated according to the feature richness of the scene. Absolute trajectory error experiments on the TUM dataset show that the improved algorithm achieves better efficiency and stability than the ORB-SLAM3 system.
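The abstract does not give implementation details for the "threshold and motion statistics" match filter. As a rough illustration only, a simplified grid-based motion-statistics filter (in the spirit of GMS, which the description resembles) can be sketched as below; the function name `gms_filter`, the grid size, and the support threshold are all illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def gms_filter(kp1, kp2, matches, img_size=(640, 480), grid=(8, 8), tau=2.0):
    """Simplified grid-based motion-statistics filter (illustrative sketch).

    kp1, kp2 : (N, 2) arrays of keypoint pixel coordinates in images 1 and 2.
    matches  : list of (i, j) index pairs into kp1 / kp2.

    Idea: correct matches move coherently, so many of them map the same
    grid cell in image 1 to the same grid cell in image 2. A match is
    kept only if its cell pair has enough supporting matches.
    """
    gx, gy = grid
    w, h = img_size

    def cell(pt):
        # Map a pixel coordinate to a flat grid-cell index.
        cx = min(int(pt[0] * gx / w), gx - 1)
        cy = min(int(pt[1] * gy / h), gy - 1)
        return cy * gx + cx

    # Count how many matches fall into each (cell in img1, cell in img2) pair.
    pair_count = {}
    cells = []
    for i, j in matches:
        c = (cell(kp1[i]), cell(kp2[j]))
        cells.append(c)
        pair_count[c] = pair_count.get(c, 0) + 1

    # Support threshold scales with the square root of the mean support,
    # loosely following the GMS acceptance criterion.
    mean_support = np.mean(list(pair_count.values()))
    thresh = tau * np.sqrt(mean_support)
    return [m for m, c in zip(matches, cells) if pair_count[c] >= thresh]
```

On synthetic data, ten coherent matches sharing one cell pair survive while a single stray match with its own cell pair is rejected, which is the intended smoothing behavior.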
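The weight allocation "according to the richness of the scene" is likewise only described at a high level. One minimal way to realize that idea, assuming the weights are simply proportional to how many features of each type the current frame provides (a hypothetical scheme, not the paper's formula), is:

```python
def allocate_weights(n_points, n_lines, eps=1e-9):
    """Hypothetical scene-richness weighting: each feature type's weight
    is its share of the total feature count in the current frame."""
    total = n_points + n_lines + eps
    return n_points / total, n_lines / total

def fused_cost(point_residuals, line_residuals):
    """Weighted sum of squared point and line reprojection residuals,
    the general shape of a point-line error model."""
    w_p, w_l = allocate_weights(len(point_residuals), len(line_residuals))
    cost_p = sum(r * r for r in point_residuals)
    cost_l = sum(r * r for r in line_residuals)
    return w_p * cost_p + w_l * cost_l
```

In a texture-rich frame (many points, few lines) the point term dominates; in a low-texture structured scene the line term takes over, which matches the motivation stated in the abstract.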