{"title":"实时变分立体重建及其在大规模密集SLAM中的应用","authors":"G. Kuschk, Aljaz Bozic, D. Cremers","doi":"10.1109/IVS.2017.7995899","DOIUrl":null,"url":null,"abstract":"We propose an algorithm for dense and direct large-scale visual SLAM that runs in real-time on a commodity notebook. A fast variational dense 3D reconstruction algorithm was developed which robustly integrates data terms from multiple images. This mitigates the effect of the aperture problem and is demonstrated on synthetic and real data. An additional property of the variational reconstruction framework is the ability to integrate sparse depth priors (e.g. from RGB-D sensors or LiDAR data) into the early stages of the visual depth reconstruction, leading to an implicit sensor fusion scheme for a variable number of heterogenous depth sensors. Embedded into a keyframe-based SLAM framework, this results in a memory efficient representation of the scene and therefore (in combination with loop-closure detection and pose tracking via direct image alignment) enables us to densely reconstruct large scenes in real-time. Experimental validation on the KITTI dataset shows that our method can recover large-scale and dense reconstructions of entire street scenes in real-time from a driving car.","PeriodicalId":143367,"journal":{"name":"2017 IEEE Intelligent Vehicles Symposium (IV)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":"{\"title\":\"Real-time variational stereo reconstruction with applications to large-scale dense SLAM\",\"authors\":\"G. Kuschk, Aljaz Bozic, D. Cremers\",\"doi\":\"10.1109/IVS.2017.7995899\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We propose an algorithm for dense and direct large-scale visual SLAM that runs in real-time on a commodity notebook. A fast variational dense 3D reconstruction algorithm was developed which robustly integrates data terms from multiple images. This mitigates the effect of the aperture problem and is demonstrated on synthetic and real data. An additional property of the variational reconstruction framework is the ability to integrate sparse depth priors (e.g. from RGB-D sensors or LiDAR data) into the early stages of the visual depth reconstruction, leading to an implicit sensor fusion scheme for a variable number of heterogenous depth sensors. Embedded into a keyframe-based SLAM framework, this results in a memory efficient representation of the scene and therefore (in combination with loop-closure detection and pose tracking via direct image alignment) enables us to densely reconstruct large scenes in real-time. 
Experimental validation on the KITTI dataset shows that our method can recover large-scale and dense reconstructions of entire street scenes in real-time from a driving car.\",\"PeriodicalId\":143367,\"journal\":{\"name\":\"2017 IEEE Intelligent Vehicles Symposium (IV)\",\"volume\":\"22 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"12\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 IEEE Intelligent Vehicles Symposium (IV)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IVS.2017.7995899\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE Intelligent Vehicles Symposium (IV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IVS.2017.7995899","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
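The abstract does not state the energy being minimized, but variational dense reconstruction of this kind is commonly posed as a TV-regularized depth estimation over a reference keyframe, with photometric data terms accumulated over several support images and a coupling term for sparse depth measurements. The following is an illustrative sketch of that general form, not the paper's exact functional; the weights \lambda and \mu, the support-image set \mathcal{I}, the warp \pi_i, and the sparse-prior mask M are notation introduced here for illustration only.

E(d) = \int_{\Omega} \lvert \nabla d(\mathbf{x}) \rvert \, \mathrm{d}\mathbf{x}
       + \lambda \sum_{i \in \mathcal{I}} \int_{\Omega} \bigl\lvert I_{\mathrm{ref}}(\mathbf{x}) - I_i\bigl(\pi_i(\mathbf{x}, d(\mathbf{x}))\bigr) \bigr\rvert \, \mathrm{d}\mathbf{x}
       + \mu \int_{\Omega} M(\mathbf{x}) \bigl( d(\mathbf{x}) - \hat{d}(\mathbf{x}) \bigr)^2 \, \mathrm{d}\mathbf{x}

Here d is the depth map of the reference keyframe, I_ref and I_i are the reference and support images, \pi_i warps a pixel \mathbf{x} with depth d(\mathbf{x}) into support image i, \hat{d} holds the sparse depth measurements (e.g. from LiDAR or an RGB-D sensor), and M is their indicator mask. Summing photometric residuals over several support images with different baselines is what counteracts the aperture problem mentioned in the abstract, and a quadratic coupling to \hat{d} is one plausible way sparse priors can enter the early stages of the reconstruction, giving the implicit sensor-fusion behaviour the abstract describes.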