{"title":"LuViRA 数据集验证与讨论:比较用于室内定位的视觉、无线电和音频传感器","authors":"Ilayda Yaman;Guoda Tian;Erik Tegler;Jens Gulin;Nikhil Challa;Fredrik Tufvesson;Ove Edfors;Kalle Åström;Steffen Malkowsky;Liang Liu","doi":"10.1109/JISPIN.2024.3429110","DOIUrl":null,"url":null,"abstract":"In this article, we present a unique comparative analysis, and evaluation of vision-, radio-, and audio-based localization algorithms. We create the first baseline for the aforementioned sensors using the recently published Lund University Vision, Radio, and Audio dataset, where all the sensors are synchronized and measured in the same environment. Some of the challenges of using each specific sensor for indoor localization tasks are highlighted. Each sensor is paired with a current state-of-the-art localization algorithm and evaluated for different aspects: localization accuracy, reliability and sensitivity to environment changes, calibration requirements, and potential system complexity. Specifically, the evaluation covers the Oriented FAST and Rotated BRIEF simultaneous localization and mapping (SLAM) algorithm for vision-based localization with an RGB-D camera, a machine learning algorithm for radio-based localization with massive multiple-input multiple-output (MIMO) technology, and the StructureFromSound2 algorithm for audio-based localization with distributed microphones. The results can serve as a guideline and basis for further development of robust and high-precision multisensory localization systems, e.g., through sensor fusion, and context- and environment-aware adaptations.","PeriodicalId":100621,"journal":{"name":"IEEE Journal of Indoor and Seamless Positioning and Navigation","volume":"2 ","pages":"240-250"},"PeriodicalIF":0.0000,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10599608","citationCount":"0","resultStr":"{\"title\":\"LuViRA Dataset Validation and Discussion: Comparing Vision, Radio, and Audio Sensors for Indoor Localization\",\"authors\":\"Ilayda Yaman;Guoda Tian;Erik Tegler;Jens Gulin;Nikhil Challa;Fredrik Tufvesson;Ove Edfors;Kalle Åström;Steffen Malkowsky;Liang Liu\",\"doi\":\"10.1109/JISPIN.2024.3429110\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this article, we present a unique comparative analysis, and evaluation of vision-, radio-, and audio-based localization algorithms. We create the first baseline for the aforementioned sensors using the recently published Lund University Vision, Radio, and Audio dataset, where all the sensors are synchronized and measured in the same environment. Some of the challenges of using each specific sensor for indoor localization tasks are highlighted. Each sensor is paired with a current state-of-the-art localization algorithm and evaluated for different aspects: localization accuracy, reliability and sensitivity to environment changes, calibration requirements, and potential system complexity. Specifically, the evaluation covers the Oriented FAST and Rotated BRIEF simultaneous localization and mapping (SLAM) algorithm for vision-based localization with an RGB-D camera, a machine learning algorithm for radio-based localization with massive multiple-input multiple-output (MIMO) technology, and the StructureFromSound2 algorithm for audio-based localization with distributed microphones. 
The results can serve as a guideline and basis for further development of robust and high-precision multisensory localization systems, e.g., through sensor fusion, and context- and environment-aware adaptations.\",\"PeriodicalId\":100621,\"journal\":{\"name\":\"IEEE Journal of Indoor and Seamless Positioning and Navigation\",\"volume\":\"2 \",\"pages\":\"240-250\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10599608\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Journal of Indoor and Seamless Positioning and Navigation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10599608/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal of Indoor and Seamless Positioning and Navigation","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10599608/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
LuViRA Dataset Validation and Discussion: Comparing Vision, Radio, and Audio Sensors for Indoor Localization
Abstract: In this article, we present a unique comparative analysis and evaluation of vision-, radio-, and audio-based localization algorithms. We create the first baseline for these sensors using the recently published Lund University Vision, Radio, and Audio (LuViRA) dataset, in which all sensors are synchronized and measured in the same environment. We highlight some of the challenges of using each sensor for indoor localization tasks. Each sensor is paired with a current state-of-the-art localization algorithm and evaluated on several aspects: localization accuracy, reliability and sensitivity to environmental changes, calibration requirements, and potential system complexity. Specifically, the evaluation covers the Oriented FAST and Rotated BRIEF (ORB) simultaneous localization and mapping (SLAM) algorithm for vision-based localization with an RGB-D camera, a machine-learning algorithm for radio-based localization with massive multiple-input multiple-output (MIMO) technology, and the StructureFromSound2 algorithm for audio-based localization with distributed microphones. The results can serve as a guideline and basis for further development of robust, high-precision multisensor localization systems, e.g., through sensor fusion and context- and environment-aware adaptations.
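The abstract's central comparison is localization accuracy of each sensor's estimated trajectory against synchronized ground truth. As a minimal sketch of how such a comparison can be scored, the Python snippet below computes per-sample Euclidean position error and summary statistics for two time-aligned trajectories. The function name, the synthetic data, and the specific metrics (MAE, RMSE, 95th percentile) are illustrative assumptions, not the paper's exact evaluation pipeline.

```python
import numpy as np


def localization_errors(estimated, ground_truth):
    """Per-sample Euclidean position errors between two time-aligned
    (N, 3) trajectories, plus common summary statistics.

    Hypothetical helper for illustration; assumes the estimated track has
    already been resampled to the ground-truth timestamps.
    """
    estimated = np.asarray(estimated, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    assert estimated.shape == ground_truth.shape, "trajectories must be aligned"

    # 3-D position error per timestamp
    errors = np.linalg.norm(estimated - ground_truth, axis=1)
    return {
        "per_sample": errors,
        "mae": errors.mean(),
        "rmse": np.sqrt(np.mean(errors**2)),
        "p95": np.percentile(errors, 95),  # tail error, a common reliability proxy
    }


if __name__ == "__main__":
    # Placeholder data only: a synthetic ground-truth track (meters) and an
    # estimate perturbed by ~5 cm noise, standing in for e.g. a SLAM track
    # versus motion-capture ground truth.
    rng = np.random.default_rng(0)
    gt = rng.uniform(0.0, 5.0, size=(1000, 3))
    est = gt + rng.normal(0.0, 0.05, size=gt.shape)

    stats = localization_errors(est, gt)
    print(f"MAE:  {stats['mae'] * 100:.1f} cm")
    print(f"RMSE: {stats['rmse'] * 100:.1f} cm")
    print(f"P95:  {stats['p95'] * 100:.1f} cm")
```

Reporting a tail percentile alongside MAE/RMSE is one simple way to capture the reliability aspect the abstract mentions, since mean errors alone can hide intermittent large failures (e.g., SLAM tracking loss).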