Feature Error Model for Integrity of Pattern-based Visual Positioning

Chen Zhu, C. Steinmetz, B. Belabbas, M. Meurer
DOI: 10.33012/2019.16956
Published in: Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2019)
Publication date: October 11, 2019
Citations: 7

Abstract

Camera-based visual navigation techniques can provide high-precision, infrastructure-less localization using visual patterns, and they play an important role in environments where satellite navigation suffers significantly degraded availability, accuracy, and integrity. However, integrity monitoring of visual navigation methods is an essential but largely unsolved topic, since modelling the geometric error of cameras is rather challenging. This work proposes a high-precision geometric error model of detected feature corners for chessboard-like patterns, named the Chessboard Corner Geometric Error Model (CCGEM). By applying the model to images containing chessboard-like patterns, the accuracy of the extracted corner locations can be predicted under different lighting conditions. The coefficients of the model can be adapted to each distinct camera-lens system through a calibration process. The proposed method first models the intensity distribution in the local neighborhood of the extracted corner, taking the raw image as the measurement input. Then, the geometric error of the feature location is modelled as a function of the distribution parameters. We show that the model fits the measurement error well in both simulated and real images. The proposed CCGEM also provides a conservative fitting model with risk probability information, which can be applied to the integrity monitoring of vision-based positioning.

Figure 1: Photometric error and the consequent geometric error in feature extraction. (a) Feature extraction without noise. (b) Feature extraction with noise.

INTRODUCTION

Camera-based visual positioning has been widely investigated for the autonomous landing of unmanned aerial vehicles (UAVs) using a designed pattern as a landing pad. For instance, the approaches of Sharp et al. [1] and Cesetti et al. [2] have attracted great attention in the research community.
In addition, visual navigation techniques have great potential in various applications, especially in environments such as urban areas where satellite navigation may perform significantly worse due to a lack of signal availability and to multipath effects, as shown, e.g., in the work of Narula et al. [3]. However, quantitative integrity monitoring of visual navigation is not yet a well-solved problem. Three basic components are essential for developing visual navigation integrity. First, a feature location error model for nominal situations is required. Second, the dilution of precision (DOP) needs to be calculated to evaluate the impact of geometry on the position estimated with cameras. Last but not least, specific fault detection and exclusion (FDE) schemes should be developed for the different fault modes in visual navigation integrity monitoring.

This work focuses on the development of a stochastic error model for the feature location. A stochastic error model is not only required for monitoring the nominal performance of visual navigation methods; it also gives researchers a better understanding of the error sources in the vision measurements, so that fault modes can be defined appropriately. At the same time, characterizing the error in the extracted feature locations is one of the greatest difficulties on the way to vision integrity monitoring. In feature-based visual navigation methods, the coordinates of the 2D features are used as sensor measurements. However, these coordinates are indirect measurements: for camera sensors, the raw measurements are the image pixel intensity values. The measurement noise of the pixel intensities is normally referred to as the photometric error n_I, which is modeled as a zero-mean Gaussian distribution with covariance σ_nI.

Fig. 1 illustrates the photometric noise and its impact on feature extraction with a simple example. Fig. 1a shows a noise-free chessboard image, where the blue "+" marker denotes the ground-truth position of the corner point. In Fig. 1b, photometric noise causes slight variations of the black and white intensities. Consequently, the corner location extracted by a feature detector, indicated by the red "+" marker, also deviates from the ground truth. The error of the estimated feature location is referred to as the geometric error.

Several challenges stand in the way of a general stochastic geometric error model. First, the distribution of feature points is not homogeneous, i.e., different feature points may follow distinct distributions. The lighting condition determines the intensity values, and the feature type as well as the viewpoint influences the geometric distribution of the intensities around the feature; either effect changes the geometric error distribution. Moreover, since feature extraction algorithms normally contain complicated, heuristic operations, describing the transformation from the photometric error in the intensity domain to the geometric error in the feature-location domain is rather challenging. In addition, the physical optical system also affects the measurement images. Optical blur arises from effects such as diffraction and diffusion when the light rays pass through the lenses, and it is normally described by a Gaussian point spread function (PSF) [4]. As a result, the distribution of the feature location error depends on the camera and lens in use. Given this diversity of the geometric error distribution, it is not reasonable to simply build statistics from a huge amount of data and derive a single homogeneous distribution as the error model.

Figure 2: Optical blur effect in a measurement image.

Although feature geometric error distributions are necessary in visual navigation, the aforementioned challenges have not been well solved yet.
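To make the propagation from photometric to geometric error concrete, the following sketch illustrates the general mechanism (it is our own illustration, not the paper's CCGEM): a synthetic chessboard 'X'-junction is blurred with a Gaussian PSF, zero-mean Gaussian photometric noise is added, and the corner is re-localized with a gradient-orthogonality subpixel refinement (the criterion behind OpenCV's cornerSubPix). All function names and parameter values are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def x_junction(size, corner):
    """Synthetic chessboard 'X'-junction: bright where (x-cx)*(y-cy) > 0."""
    ys, xs = np.mgrid[0:size, 0:size].astype(float)
    return ((xs - corner[0]) * (ys - corner[1]) > 0).astype(float)

def gaussian_blur(img, sigma):
    """Separable Gaussian PSF model for optical blur (diffraction/diffusion)."""
    r = int(3 * sigma + 0.5)
    x = np.arange(-r, r + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    img = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, img, k, mode="same")

def subpixel_corner(img):
    """Gradient-orthogonality refinement: at the corner p, the gradient g(x)
    at every nearby pixel x is orthogonal to (x - p), which yields the
    2x2 linear system (sum g g^T) p = sum (g g^T) x."""
    gy, gx = np.gradient(img)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    gxx, gxy, gyy = gx * gx, gx * gy, gy * gy
    A = np.array([[gxx.sum(), gxy.sum()],
                  [gxy.sum(), gyy.sum()]])
    b = np.array([(gxx * xs + gxy * ys).sum(),
                  (gxy * xs + gyy * ys).sum()])
    return np.linalg.solve(A, b)

true_xy = (10.0, 10.0)
clean = gaussian_blur(x_junction(21, true_xy), sigma=1.0)  # PSF-blurred patch

sigma_photo = 0.05  # photometric noise std (fraction of full intensity scale)
errs = [np.hypot(*(subpixel_corner(clean + rng.normal(0, sigma_photo, clean.shape))
                   - np.array(true_xy)))
        for _ in range(200)]
print(f"mean geometric error at sigma_photo={sigma_photo}: {np.mean(errs):.3f} px")
```

Rerunning the Monte Carlo loop at several values of sigma_photo reproduces the qualitative behavior discussed above: the geometric error distribution changes with the photometric noise level and with the blur applied to the patch.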
In state-of-the-art visual navigation methods such as ORB-SLAM [5], the geometric error covariance is normally chosen heuristically (e.g., set to 1 pixel in ORB-SLAM). This is unacceptable for integrity monitoring, since tuning the parameter for specific scenarios does not ensure that the model remains valid when the visible scene changes. The reprojection error (the feature-location residual given the estimated pose) is widely used in visual navigation textbooks such as [6] to describe the feature error. However, the statistics of the residuals are clearly not a proper error model, since the estimated states used for computing the reprojection error can themselves be biased. Kumar and Osechas [7] and Edwards et al. [8] have shown that, for designed patterns, the feature location error follows a Gaussian distribution in nominal situations. Nevertheless, those results are still qualitative, since the variance of the distribution is an ad-hoc value obtained from experiments in particular scenarios. In this work, we propose a subpixel-precision geometric error model of detected corners, named CCGEM (Chessboard Corner Geometric Error Model). CCGEM targets a specific type of corner (chessboard-like 'X'-junctions), which can either come from a designed landmark or be extracted as natural features. It models the stochastic geometric error as a function of a few local parameters of the measurement image, which vary as the lighting or the visible scene changes. These parameters can be extracted from the local image patches around the corners with affordable complexity. Some coefficients in the model depend on the optical instrument in use. They can be obtained through a calibration process for each distinct camera-lens combination, so that the model is generalizable to the different optical systems of end users.
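The reprojection error mentioned above can be made concrete with a minimal pinhole-camera sketch (our own notation and hypothetical numbers, not from the paper): residuals are the difference between measured pixel coordinates and the projection of the known 3-D points under an estimated pose.

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of Nx3 world points X to Nx2 pixel coordinates."""
    Xc = X @ R.T + t            # world frame -> camera frame
    uvw = Xc @ K.T              # apply intrinsic matrix
    return uvw[:, :2] / uvw[:, 2:3]

def reprojection_errors(K, R, t, X, z):
    """Per-feature reprojection residual norms for measured pixels z."""
    return np.linalg.norm(z - project(K, R, t, X), axis=1)

# Hypothetical intrinsics and pose, for illustration only.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])   # pattern 5 m in front of camera
X = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0],
              [0.0, 0.2, 0.0], [0.2, 0.2, 0.0]])  # planar chessboard corners

z = project(K, R, t, X)                    # noise-free measurements
print(reprojection_errors(K, R, t, X, z))  # ~0 at the true pose
# Caveat (the point made in the text): near-zero residuals at an *estimated*
# pose do not certify the pose, because the estimate may absorb a common bias
# in the measured feature locations.
```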
Therefore, CCGEM is a quantitative error model of the feature location that generalizes across optical systems and lighting conditions. In addition, conservative strategies are proposed in the coefficient-fitting process to meet integrity demands.
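The paper does not detail its conservative fitting strategy in this excerpt, so the sketch below shows one common overbounding heuristic as an assumed stand-in: choose the Gaussian standard deviation so that the model assigns at least the allocated risk probability to the empirical error tail, inflating sigma whenever the data are heavier-tailed than a Gaussian.

```python
import numpy as np
from statistics import NormalDist

def conservative_sigma(errors, p_risk=1e-3):
    """Choose sigma so that N(0, sigma^2) places probability p_risk beyond
    the empirical (1 - p_risk) quantile of |error|, i.e. the Gaussian model
    is at least as pessimistic as the data at that risk level."""
    q = np.quantile(np.abs(errors), 1.0 - p_risk)   # empirical tail point
    k = NormalDist().inv_cdf(1.0 - p_risk / 2.0)    # two-sided Gaussian quantile
    return q / k

rng = np.random.default_rng(1)
gauss = rng.normal(0.0, 0.1, 100_000)               # well-behaved errors [px]
heavy = rng.standard_t(df=3, size=100_000) * 0.1    # heavier-tailed errors

print(conservative_sigma(gauss, 1e-2))  # recovers roughly the true 0.1 px
print(conservative_sigma(heavy, 1e-2))  # inflated to cover the heavy tail
```

The design intent matches the integrity use case described above: underestimating sigma is far more harmful than overestimating it, so ties are broken toward the pessimistic side.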