{"title":"hyperlam:一种通用和模块化的连续时间传感器融合和同步定位和映射方法","authors":"David Hug, M. Chli","doi":"10.1109/3DV50981.2020.00108","DOIUrl":null,"url":null,"abstract":"Within recent years, Continuous-Time Simultaneous Localization And Mapping (CTSLAM) formalisms have become subject to increased attention from the scientific community due to their vast potential in facilitating motion corrected feature reprojection and direct unsynchronized multi-rate sensor fusion. They also hold the promise of yielding better estimates in traditional sensor setups (e.g. visual, inertial) when compared to conventional discrete-time approaches. Related works mostly rely on cubic, $C^{2}-$continuous, uniform cumulative B-Splines to exemplify and demonstrate the benefits inherent to continuous-time representations. However, as this type of splines gives rise to continuous trajectories by blending uniformly distributed $\\mathbb{SE}_{3}$ transformations in time, it is prone to under- or overparametrize underlying motions with varying volatility and prohibits dynamic trajectory refinement or sparsification by design. In light of this, we propose employing a more generalized and efficient non-uniform split interpolation method in $\\mathbb{R}\\times \\mathbb{SU}_{2}\\times \\mathbb{R}^{3}$ and commence with development of ‘HyperSLAM’, a generic and modular CTSLAM framework. The efficacy of our approach is exemplified in proof-of-concept simulations based on a visual, monocular setup.","PeriodicalId":293399,"journal":{"name":"2020 International Conference on 3D Vision (3DV)","volume":"22 14_suppl 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"HyperSLAM: A Generic and Modular Approach to Sensor Fusion and Simultaneous Localization And Mapping in Continuous-Time\",\"authors\":\"David Hug, M. 
Chli\",\"doi\":\"10.1109/3DV50981.2020.00108\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Within recent years, Continuous-Time Simultaneous Localization And Mapping (CTSLAM) formalisms have become subject to increased attention from the scientific community due to their vast potential in facilitating motion corrected feature reprojection and direct unsynchronized multi-rate sensor fusion. They also hold the promise of yielding better estimates in traditional sensor setups (e.g. visual, inertial) when compared to conventional discrete-time approaches. Related works mostly rely on cubic, $C^{2}-$continuous, uniform cumulative B-Splines to exemplify and demonstrate the benefits inherent to continuous-time representations. However, as this type of splines gives rise to continuous trajectories by blending uniformly distributed $\\\\mathbb{SE}_{3}$ transformations in time, it is prone to under- or overparametrize underlying motions with varying volatility and prohibits dynamic trajectory refinement or sparsification by design. In light of this, we propose employing a more generalized and efficient non-uniform split interpolation method in $\\\\mathbb{R}\\\\times \\\\mathbb{SU}_{2}\\\\times \\\\mathbb{R}^{3}$ and commence with development of ‘HyperSLAM’, a generic and modular CTSLAM framework. 
The efficacy of our approach is exemplified in proof-of-concept simulations based on a visual, monocular setup.\",\"PeriodicalId\":293399,\"journal\":{\"name\":\"2020 International Conference on 3D Vision (3DV)\",\"volume\":\"22 14_suppl 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 International Conference on 3D Vision (3DV)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/3DV50981.2020.00108\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 International Conference on 3D Vision (3DV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/3DV50981.2020.00108","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
HyperSLAM: A Generic and Modular Approach to Sensor Fusion and Simultaneous Localization And Mapping in Continuous-Time
In recent years, Continuous-Time Simultaneous Localization And Mapping (CTSLAM) formalisms have attracted increasing attention from the scientific community due to their vast potential for motion-corrected feature reprojection and direct, unsynchronized multi-rate sensor fusion. They also hold the promise of yielding better estimates than conventional discrete-time approaches in traditional sensor setups (e.g. visual, inertial). Related works mostly rely on cubic, $C^{2}$-continuous, uniform cumulative B-splines to exemplify and demonstrate the benefits inherent to continuous-time representations. However, as this type of spline produces continuous trajectories by blending uniformly distributed $\mathbb{SE}_{3}$ transformations in time, it is prone to under- or over-parametrizing underlying motions of varying volatility and, by design, prohibits dynamic trajectory refinement or sparsification. In light of this, we propose a more general and efficient non-uniform split interpolation method in $\mathbb{R}\times \mathbb{SU}_{2}\times \mathbb{R}^{3}$ and commence the development of ‘HyperSLAM’, a generic and modular CTSLAM framework. The efficacy of our approach is demonstrated in proof-of-concept simulations based on a monocular visual setup.
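The uniform cumulative B-spline baseline that the abstract critiques can be made concrete. Below is a minimal, illustrative Python sketch (not taken from the paper; all function names are our own) of cubic, $C^{2}$-continuous cumulative B-spline pose interpolation using a split quaternion-plus-translation parametrization ($\mathbb{SU}_{2}\times \mathbb{R}^{3}$) rather than joint $\mathbb{SE}_{3}$ blending:

```python
import numpy as np

# Cumulative blending matrix of a cubic (order k = 4), C^2-continuous,
# *uniform* B-spline. Row j holds the polynomial coefficients of the
# cumulative basis function B~_{j+1}(u) = c0 + c1*u + c2*u^2 + c3*u^3
# (B~_0 is identically 1 and needs no row).
C_TILDE = np.array([[5.0, 3.0, -3.0,  1.0],
                    [1.0, 3.0,  3.0, -2.0],
                    [0.0, 0.0,  0.0,  1.0]]) / 6.0

def q_mul(a, b):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w = a[0] * b[0] - np.dot(a[1:], b[1:])
    v = a[0] * b[1:] + b[0] * a[1:] + np.cross(a[1:], b[1:])
    return np.concatenate(([w], v))

def q_conj(q):
    """Conjugate (inverse for unit quaternions)."""
    return np.concatenate(([q[0]], -q[1:]))

def q_exp(v):
    """Exponential map R^3 -> unit quaternions."""
    th = np.linalg.norm(v)
    if th < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate(([np.cos(th)], np.sin(th) * v / th))

def q_log(q):
    """Logarithm map unit quaternions -> R^3."""
    n = np.linalg.norm(q[1:])
    if n < 1e-12:
        return np.zeros(3)
    return q[1:] / n * np.arctan2(n, q[0])

def cumulative_bspline_pose(quats, trans, i, u):
    """Pose at local spline time u in [0, 1) on segment i, blending the
    four control poses i .. i+3 in the cumulative formulation: each step
    applies a fraction B~_j(u) of the *relative* motion between
    neighbouring control poses, so rotations stay on the manifold."""
    b = C_TILDE @ np.array([1.0, u, u * u, u ** 3])
    q, p = quats[i].copy(), trans[i].copy()
    for j in range(3):
        dq = q_log(q_mul(q_conj(quats[i + j]), quats[i + j + 1]))
        q = q_mul(q, q_exp(b[j] * dq))
        p = p + b[j] * (trans[i + j + 1] - trans[i + j])
    return q, p
```

For control poses translating uniformly along one axis, the blended position at u = 0.5 on the first segment lands 1.5 units along that axis, consistent with uniform motion. The fixed matrix `C_TILDE` is exactly what ties this baseline to uniformly spaced control poses; a non-uniform formulation of the kind the paper pursues would replace it with knot-dependent blending coefficients.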