Combining Visual, Pedestrian, and Collaborative Navigation Techniques for Team Based Infrastructure Free Indoor Navigation

Aiden Morrison, L. Ruotsalainen, Maija Makela, Jesperi Rantanen, N. Sokolova
{"title":"Combining Visual, Pedestrian, and Collaborative Navigation Techniques for Team Based Infrastructure Free Indoor Navigation","authors":"Aiden Morrison, L. Ruotsalainen, Maija Makela, Jesperi Rantanen, N. Sokolova","doi":"10.33012/2019.17098","DOIUrl":null,"url":null,"abstract":"In this paper the authors describe the design and evaluation of a multi sensor integrated navigation system specifically targeted at teams of cooperating users operating in transient indoor conditions such as would be encountered by emergency services personnel or soldiers entering unknown buildings. Since these conditions preclude the use of dedicated indoor infrastructure the system depends on the combination of multiple self contained navigation sensors as well as dynamic networking and ranging between the users to form a decentralized cooperative navigating team. Within this paper we will discuss the design and evaluation of a system developed within a North Atlantic Treaty Organization (NATO) Science for Peace and Security (SPS) project executed by the SINTEF and the Finnish Geospatial Researcher Institute (FGI) during 2018 and 2019. The motivation of this project was to combine the expertise of the FGI in pedestrian and camera based infrastructure free navigation with the collaborative navigation and integrated navigation system design expertise of SINTEF towards the accurate navigation and continuous situational awareness of teams of cooperating users. When completed, the combined navigation system will be a shoulder mounted package which comprises a triple frequency GNSS receiver for rapid outdoor initialization, as well as a Micro Electro Mechanical System (MEMS) Inertial Measurement Unit (IMU), barometer, magnetometer, three different navigation and communication radios as well as a stereo vision plus depth sensing camera connected to and synchronized by an integrated processor platform. Two of the three radios provide for user-to-user range measurement and data exchange via each of 2.4 GHz and Ultra Wide-Band (UWB) signals to allow for collaborative navigation as well as situational awareness within the network, while the 3 rd radio provides a link to separate navigation sensors such as a foot mounted IMU pod for enhanced Pedestrian Dead Reckoning (PDR). The integrated camera provides stereo color imaging as well as structured light based infrared depth sensing, while the processor platform is responsible for data collection and processing. Introduction The motivation in pursuing infrastructure free navigation systems relates to the fact that certain classes of user including firefighters, law enforcement, soldiers and others must enter hazardous indoor environments on short notice and without detailed knowledge of the interior structure, layout or contents of these buildings. Additionally, since the building might be on fire or otherwise denied electrical power, reliance on even ad-hoc infrastructure such as Wi-Fi routers may not be a reliable source of navigation data. Assuming that the building materials block the majority of GNSS signals to the users, the remaining options are typically those sources of information that are self-contained to the individual user such as inertial sensors and visual odometry (VO) to allow each user to navigate free of infrastructure, as well as leveraging the collective network via user to user radio links to realize collaborative navigation within the team. 
Background The Infrastructure-free tactical situational awareness (INTACT) project, funded by the Finnish Scientific Advisory Board for Defence (MATINE) for years 2015-2017, analyzed and developed methods for infrastructure–free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness using only measurements obtained from small and low-cost MEMS sensors mounted on the body of the user. More precisely, during the project error analysis, and estimation methods were developed for obtaining accurate and reliable horizontal position solution fusing measurements from inertial sensors and computer vision and vertical position solution from fusing barometer and sonar observations [1]. Machine learning was used for detecting the user motion and thereby adjusting the estimation parameters and thresholds for improved solution [2]. At the end of the project a proof-of-concept was carried out at the military premises in Finland by two soldiers. Computation of the fused navigation solution was complicated by exposing inertial sensors and the camera to atypical motion and harsh impacts, such as jumps, running and climbing stall bars sideways. The final result, accuracy being 1% of the travelled path, was analyzed to be comparable with state-of-the-art infrastructure-free navigation solutions made by walking forward along a largely straight path [3]. SINTEF had previously conducted multiple projects exploring the feasibility of team based navigation in outdoor-indoor building entry scenarios, and through work funded by the Norwegian Battle Lab and Experimentation (NOBLE), prototype shoulder mounted navigation systems comprising GNSS, inertial and dual user-to-user range estimating radio modules which allowed direct implementation and testing of the collaborative navigation concepts explained in the next subsection. In these initial studies the navigation performance of each individual user was enhanced, relative user to user error was reduced, and the situational awareness of the overall team status was greatly enhanced through the periodic forwarding of 3 rd party user status when performing ranging cycles, allowing a hypothetical supervisor or vehicle mounted node outside the building to serve as both a reference anchor for ranging but also to maintain knowledge of the entire team even when Line-of-Sight (LOS) communication was blocked to most team members. While the INTACT project and collaborative navigation projects both achieved respectable performance during their respective testing, the individual systems had notable drawbacks, such as requiring illumination within the environment to be relatively high for proper operation of the visual odometry within the INTACT system, and the long term systemic drift of the collaborative navigation systems when the entire team was isolated from absolute position reference for an extended period of time. Before moving further, a more detailed explanation of the techniques that are to be combined in this study are now presented. CORE TECHNIQUES The combined SINTEF FGI navigation system, and the tests conducted in this study rely on several sources of navigation data throughout a typical outdoor-indoor trajectory. While GNSS is only used during system initialization, and barometry provides only a height constraint, the primary sources of Position Velocity and Attitude (PVA) estimation are derived from Collaborative Navigation, VO, and PDR, the implementations of which are now discussed. 
Collaborative Navigation Collaborative navigation is based on an idea of using team members as local beacons. By measuring distance to other team members and utilizing those measurements along with shared location information, the navigation solution can be enhanced especially in GNSS-challenged environments. Collaborative navigation approaches can provide position estimate in a global coordinate frame also to team members without access to GNSS signals, given that at least some team members are able to use satellite navigation [4]. Even if the whole team has no GNSS signal available, as can happen for instance in indoor environments, they can estimate their absolute and relative positions using the range constraints along with the estimates formed by their inertial navigation sensors. The position estimation algorithm in collaborative navigation can be either centralized or de-centralized [5]. In the centralized approach, all measurements made by the team members are transmitted to some central processing unit. The unit computes the position estimates and transmits them back to the collaborators. In de-centralized approach each team member uses only measurements made locally, and position plus range estimates broadcast by other team members which can be directly communicated with. Compared to centralized position estimation, the de-centralized approach requires less communications over the network, scales better with the size of the team [5][6], and is tolerant of extended gaps in communication between individuals or groups of users. The key element in collaborative navigation is distance measurements between the team members. UWB ranging suits well for the application at hand, as it is tolerant to multipath and can also be used through walls up to some extent [7]. However, this can make sensor fusion more challenging as in Non-Line-of-Sight (NLOS) situations the UWB distance measurement error is not necessarily Gaussian [8], which is a requirement for Extended Kalman Filter (EKF) [9] commonly used in navigation applications. Overbounding Gaussian distributions can be used in the EKF but this approach does not necessarily provide optimal results [10]. Without GNSS clock synchronization between the ranging devices can be difficult to maintain, but by using Two-Way Time-of-Arrival (TOA) distance measurements or synchronizing a sufficiently stable local oscillator when GNSS is available the requirement of clock synchronization can be avoided. In this project, a completely de-centralized implementation is adopted as the target environments are those where point to point communication will be unreliable, and therefore centralized processing of data with reasonable latency is not considered feasible. In this de-centralized implementation users periodically announce their presence to other users in range, who keep an updated list of which users are recently visible and therefore considered valid targets for ranging and communication. 
Multiple-access for up to 32 nodes is achieved through time slicing based on user addresses, with synchronization of the mobile nodes to a common time base achieved via use of the onboard GNSS receiver during initialization, and carried forward by a local oscillator with stability sufficient to maintain valid access patterns for ove","PeriodicalId":381025,"journal":{"name":"Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2019)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2019)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.33012/2019.17098","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In this paper the authors describe the design and evaluation of a multi-sensor integrated navigation system specifically targeted at teams of cooperating users operating in transient indoor conditions, such as those encountered by emergency services personnel or soldiers entering unknown buildings. Since these conditions preclude the use of dedicated indoor infrastructure, the system depends on the combination of multiple self-contained navigation sensors as well as dynamic networking and ranging between the users to form a decentralized, cooperative navigating team. Within this paper we discuss the design and evaluation of a system developed within a North Atlantic Treaty Organization (NATO) Science for Peace and Security (SPS) project executed by SINTEF and the Finnish Geospatial Research Institute (FGI) during 2018 and 2019. The motivation of this project was to combine the expertise of the FGI in pedestrian and camera-based infrastructure-free navigation with the collaborative navigation and integrated navigation system design expertise of SINTEF, towards accurate navigation and continuous situational awareness for teams of cooperating users. When completed, the combined navigation system will be a shoulder-mounted package comprising a triple-frequency GNSS receiver for rapid outdoor initialization, as well as a Micro-Electro-Mechanical System (MEMS) Inertial Measurement Unit (IMU), a barometer, a magnetometer, three different navigation and communication radios, and a stereo vision plus depth-sensing camera, all connected to and synchronized by an integrated processor platform. Two of the three radios provide user-to-user range measurement and data exchange via 2.4 GHz and Ultra-Wide-Band (UWB) signals respectively, allowing collaborative navigation as well as situational awareness within the network, while the third radio provides a link to separate navigation sensors such as a foot-mounted IMU pod for enhanced Pedestrian Dead Reckoning (PDR). The integrated camera provides stereo color imaging as well as structured-light-based infrared depth sensing, while the processor platform is responsible for data collection and processing.

Introduction

The motivation for pursuing infrastructure-free navigation systems is that certain classes of user, including firefighters, law enforcement, soldiers and others, must enter hazardous indoor environments on short notice and without detailed knowledge of the interior structure, layout or contents of these buildings. Additionally, since the building might be on fire or otherwise denied electrical power, even ad-hoc infrastructure such as Wi-Fi routers may not be a reliable source of navigation data. Assuming that the building materials block the majority of GNSS signals to the users, the remaining options are typically those sources of information that are self-contained to the individual user, such as inertial sensors and visual odometry (VO), which allow each user to navigate free of infrastructure, together with the collective network of user-to-user radio links, which enables collaborative navigation within the team.
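As a purely illustrative aside on the synchronization role of the processor platform mentioned above, the following minimal Python sketch shows one way heterogeneous sensor samples could be tagged against a common, GNSS-initialized time base; the class, field and parameter names are assumptions made for this sketch and are not taken from the system described here.

    from dataclasses import dataclass
    from enum import Enum, auto

    class SensorType(Enum):
        IMU = auto()
        BAROMETER = auto()
        MAGNETOMETER = auto()
        STEREO_CAMERA = auto()
        UWB_RANGE = auto()

    @dataclass
    class TimedSample:
        sensor: SensorType
        t_common: float   # seconds in the shared, GNSS-initialized time base
        payload: tuple    # raw measurement values, sensor dependent

    def tag_sample(sensor, payload, t_local, clock_offset, clock_drift):
        # Map a sensor's local timestamp onto the common time base using a
        # simple first-order clock model; offset and drift would be estimated
        # while GNSS time is available and propagated afterwards
        # (illustrative model only).
        t_common = t_local + clock_offset + clock_drift * t_local
        return TimedSample(sensor, t_common, payload)

    # Example: an IMU sample 0.1 s after local start, with a 2 ms clock offset
    sample = tag_sample(SensorType.IMU, (0.01, -0.02, 9.81), 0.1, 0.002, 1e-6)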
Background

The Infrastructure-free tactical situational awareness (INTACT) project, funded by the Finnish Scientific Advisory Board for Defence (MATINE) for the years 2015-2017, analyzed and developed methods for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness, using only measurements obtained from small, low-cost MEMS sensors mounted on the body of the user. More precisely, during the project, error analysis and estimation methods were developed for obtaining an accurate and reliable horizontal position solution by fusing measurements from inertial sensors and computer vision, and a vertical position solution by fusing barometer and sonar observations [1]. Machine learning was used to detect the user's motion and thereby adjust the estimation parameters and thresholds for an improved solution [2]. At the end of the project a proof-of-concept was carried out by two soldiers at military premises in Finland. Computation of the fused navigation solution was complicated by the exposure of the inertial sensors and the camera to atypical motion and harsh impacts, such as jumps, running and climbing stall bars sideways. The final result, an error of 1% of the travelled path, was found to be comparable with state-of-the-art infrastructure-free navigation solutions obtained by walking forward along a largely straight path [3].

SINTEF had previously conducted multiple projects exploring the feasibility of team-based navigation in outdoor-indoor building entry scenarios and, through work funded by the Norwegian Battle Lab and Experimentation (NOBLE), built prototype shoulder-mounted navigation systems comprising GNSS, inertial and dual user-to-user range-estimating radio modules, which allowed direct implementation and testing of the collaborative navigation concepts explained in the next subsection. In these initial studies the navigation performance of each individual user was enhanced, relative user-to-user error was reduced, and awareness of the overall team status was greatly improved through the periodic forwarding of third-party user status during ranging cycles, allowing a hypothetical supervisor or vehicle-mounted node outside the building to serve both as a reference anchor for ranging and as a node maintaining knowledge of the entire team even when Line-of-Sight (LOS) communication was blocked to most team members.

While the INTACT project and the collaborative navigation projects both achieved respectable performance during their respective testing, the individual systems had notable drawbacks, such as the relatively high illumination required for proper operation of the visual odometry within the INTACT system, and the long-term systemic drift of the collaborative navigation systems when the entire team was isolated from an absolute position reference for an extended period of time. Before moving further, a more detailed explanation of the techniques to be combined in this study is now presented.

CORE TECHNIQUES

The combined SINTEF-FGI navigation system, and the tests conducted in this study, rely on several sources of navigation data throughout a typical outdoor-indoor trajectory. While GNSS is used only during system initialization and barometry provides only a height constraint, the primary sources of Position, Velocity and Attitude (PVA) estimation are Collaborative Navigation, VO, and PDR, whose implementations are now discussed.
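Since PDR is one of the primary PVA sources named above, a minimal step-and-heading sketch is included here for orientation; the step length and heading are assumed to come from upstream IMU processing, and the function below is an illustrative simplification rather than a description of the project's actual filter.

    import numpy as np

    def pdr_propagate(position, step_length, heading_rad):
        # Advance a 2-D position by one detected step.
        # position    : np.array([east, north]) in metres
        # step_length : estimated stride length in metres
        # heading_rad : heading measured clockwise from north, in radians
        delta = step_length * np.array([np.sin(heading_rad), np.cos(heading_rad)])
        return position + delta

    # Example: three 0.7 m steps heading due east
    p = np.zeros(2)
    for _ in range(3):
        p = pdr_propagate(p, 0.7, np.pi / 2)
    print(p)   # approximately [2.1, 0.0]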
Collaborative Navigation

Collaborative navigation is based on the idea of using team members as local beacons. By measuring distances to other team members and using those measurements together with shared location information, the navigation solution can be enhanced, especially in GNSS-challenged environments. Collaborative navigation approaches can provide a position estimate in a global coordinate frame even to team members without access to GNSS signals, provided that at least some team members are able to use satellite navigation [4]. Even if the whole team has no GNSS signal available, as can happen for instance in indoor environments, the members can estimate their absolute and relative positions using the range constraints together with the estimates formed by their inertial navigation sensors.

The position estimation algorithm in collaborative navigation can be either centralized or decentralized [5]. In the centralized approach, all measurements made by the team members are transmitted to a central processing unit, which computes the position estimates and transmits them back to the collaborators. In the decentralized approach, each team member uses only measurements made locally, plus the position and range estimates broadcast by other team members with which it can communicate directly. Compared to centralized position estimation, the decentralized approach requires less communication over the network, scales better with the size of the team [5][6], and is tolerant of extended gaps in communication between individuals or groups of users.

The key element in collaborative navigation is the distance measurement between team members. UWB ranging is well suited to the application at hand, as it is tolerant of multipath and can, to some extent, also be used through walls [7]. However, this can make sensor fusion more challenging: in Non-Line-of-Sight (NLOS) situations the UWB distance measurement error is not necessarily Gaussian [8], which is a requirement of the Extended Kalman Filter (EKF) [9] commonly used in navigation applications. Overbounding Gaussian distributions can be used in the EKF, but this approach does not necessarily provide optimal results [10]. Without GNSS, clock synchronization between the ranging devices can be difficult to maintain, but by using Two-Way Time-of-Arrival (TOA) distance measurements, or by synchronizing a sufficiently stable local oscillator while GNSS is available, the requirement for clock synchronization can be avoided.

In this project a completely decentralized implementation is adopted, since the target environments are those where point-to-point communication will be unreliable, and centralized processing of data with reasonable latency is therefore not considered feasible. In this decentralized implementation, users periodically announce their presence to other users in range, who keep an updated list of which users are recently visible and therefore considered valid targets for ranging and communication. Multiple access for up to 32 nodes is achieved through time slicing based on user addresses, with synchronization of the mobile nodes to a common time base achieved via the onboard GNSS receiver during initialization and carried forward by a local oscillator with stability sufficient to maintain valid access patterns for over …
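To make the ranging and multiple-access scheme described above more concrete, the following sketch computes a two-way Time-of-Arrival range, which cancels the clock offset between the two radios, and an address-based time slot in the spirit of the time slicing described in the text; the slot duration and frame layout are assumptions chosen only for illustration.

    C = 299_792_458.0  # speed of light, m/s

    def two_way_toa_range(t_round, t_reply):
        # Range from a two-way exchange: the initiator measures the round-trip
        # time t_round, the responder reports its internal turnaround time
        # t_reply. The clock offset between the radios cancels; only clock
        # drift over the short exchange remains.
        return C * (t_round - t_reply) / 2.0

    def tdma_slot(user_address, num_slots=32, slot_duration_s=0.010):
        # Address-based time slicing: each of up to 32 nodes owns one slot per
        # frame. The 10 ms slot length is an assumed value for illustration.
        slot = user_address % num_slots
        frame_length_s = num_slots * slot_duration_s
        slot_start_s = slot * slot_duration_s
        return slot, slot_start_s, frame_length_s

    # Example: a 1.2 us round trip with a 0.5 us responder turnaround
    print(two_way_toa_range(1.2e-6, 0.5e-6))   # roughly 105 m
    print(tdma_slot(user_address=7))           # slot 7, starting 70 ms into the frame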
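Returning to the sensor-fusion side, the sketch below shows how a single broadcast neighbour position and UWB range could update a user's own position estimate in an EKF, in the spirit of the decentralized approach discussed above; the state layout, noise values and the treatment of the neighbour position as known are simplifying assumptions rather than the project's implementation.

    import numpy as np

    def range_update(x, P, neighbour_pos, measured_range, sigma_range):
        # EKF measurement update for one user-to-user range.
        # x             : own state; the first two elements are 2-D position (m)
        # P             : own state covariance
        # neighbour_pos : broadcast position of the ranging partner, treated as
        #                 known here (its uncertainty could instead be folded
        #                 into sigma_range); assumes the users are not co-located
        dx = x[:2] - neighbour_pos
        predicted = np.linalg.norm(dx)
        H = np.zeros((1, x.size))
        H[0, :2] = dx / predicted          # Jacobian of range w.r.t. own position
        R = np.array([[sigma_range ** 2]])
        y = measured_range - predicted     # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_new = x + K.flatten() * y
        P_new = (np.eye(x.size) - K @ H) @ P
        return x_new, P_new

    # Example: 4-state [e, n, ve, vn], neighbour 10 m east, measured range 9.5 m
    x0 = np.zeros(4)
    P0 = np.diag([4.0, 4.0, 1.0, 1.0])
    x1, P1 = range_update(x0, P0, np.array([10.0, 0.0]), 9.5, 0.3)

In practice, the non-Gaussian NLOS errors noted above would call for inflating sigma_range or gating the innovation before applying such an update.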