{"title":"Feature Error Model for Integrity of Pattern-based Visual Positioning","authors":"Chen Zhu, C. Steinmetz, B. Belabbas, M. Meurer","doi":"10.33012/2019.16956","DOIUrl":null,"url":null,"abstract":"Camera-based visual navigation techniques can provide high precision infrastructure-less localization solutions using visual patterns, and play an important role in the environments where satellite navigation has significantly degraded performance in availability, accuracy, and integrity. However, the integrity monitoring of visual navigation methods is an essential but hardly-solved topic, since modelling the geometric error for cameras is rather challenging. This work proposes a highprecision geometric error model of detected feature corners for chessboard-like patterns. The model is named as Chessboard Corner Geometric Error Model (CCGEM). By applying the model to images containing chessboard-like patterns, the extracted corner location accuracy can be predicted in different lighting conditions. The coefficients in the model can be adapted to each distinct camera-lens system through a calibration process. The proposed method first models the intensity distribution in the local neighboring area of the extracted corner by taking the raw image as measurement input. Then, the geometric error of the feature location is modelled as a function of the distribution parameters. We show that the model fits the measurement error well in both simulated and real images. The proposed CCGEM also provides a conservative fitting model with risk probability information, which can be applied in the integrity monitoring of vision-based positioning. (a) Feature extraction without noise (b) Feature extraction with noise Figure 1: Photometric error and consequential geometric error in feature extraction INTRODUCTION Camera-based visual positioning has been widely investigated for autonomous landing of unmanned aerial vehicles (UAV) using a designed pattern as a landing pad. 
For instance, the approaches of Sharp et al. [1] and Cesetti et al. [2] have attracted great attention in the research community. In addition, visual navigation techniques have huge potential in various applications, especially in environments such as urban areas where satellite navigation may suffer significantly degraded performance due to lack of signal availability and multipath effects, as shown, e.g., in the work of Narula et al. [3]. However, quantitative integrity monitoring of visual navigation is not yet a well-solved problem. Three basic components are essential for developing visual navigation integrity. First, a feature location error model for nominal situations is required. Second, the dilution of precision (DOP) must be calculated to evaluate the impact of measurement geometry on the camera-based position estimate. Last but not least, specific fault detection and exclusion (FDE) schemes should be developed for the different fault modes in visual navigation integrity monitoring.

This work focuses on the development of a stochastic error model for the feature location. Such a model is not only required for monitoring the nominal performance of visual navigation methods, but also gives researchers a better understanding of the error sources in vision measurements, so that fault modes can be defined appropriately. At the same time, characterizing the error of the extracted feature locations is one of the largest difficulties on the way to vision integrity monitoring. In feature-based visual navigation methods, the coordinates of the 2D features serve as sensor measurements. These coordinates are, however, indirect measurements: for camera sensors, the raw measurements are the image pixel intensity values. The measurement noise of the pixel intensities is normally referred to as photometric error n_I, which is modeled as a zero-mean Gaussian distribution with covariance σ_nI.

Fig. 1 illustrates the photometric noise and its impact on feature extraction with a simple example. Fig. 1a shows a noise-free chessboard image, where the blue "+" marker denotes the ground-truth position of the corner point. In Fig. 1b, the photometric noise causes slight variations of the black and white intensities. Consequently, the corner location extracted by a feature detector, indicated by the red "+" marker, deviates from the ground truth. The error of the estimated feature location is referred to as geometric error.

There are several challenges on the way to a general stochastic geometric error model. First, the distribution of feature point errors is not homogeneous, i.e., different feature points may follow distinct distributions. The lighting condition determines the intensity values, while the feature type and the viewpoint influence the geometric distribution of the intensities around the feature; either effect changes the geometric error distribution. Moreover, since feature extraction algorithms normally involve complicated and heuristic operations, describing the error transformation from the photometric error in the intensity domain to the geometric error in the feature location domain is rather challenging. In addition, the physical optical system also affects the measurement images: optical blur arises from effects such as diffraction and diffusion when light rays pass through the lenses, and is normally described by a Gaussian point spread function (PSF) [4]. As a result, the distribution of the feature location error depends on the camera and lens used. Given this diversity of the geometric error distribution, it is not reasonable to simply build statistics from a huge amount of data and derive a single homogeneous distribution as the error model.
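The propagation from photometric error to geometric error described above can be illustrated with a minimal Monte Carlo sketch. All values below (patch size, blur width, noise level) and the simple centroid-of-gradient-energy corner estimator are illustrative assumptions, not the paper's model: a blurred synthetic X-junction is perturbed with zero-mean Gaussian intensity noise, and the corner is re-estimated, so the scatter of the estimates empirically shows the induced geometric error.

```python
import numpy as np

rng = np.random.default_rng(0)

def chessboard_patch(size=21, corner=(10.0, 10.0), blur_sigma=1.0):
    # Ideal X-junction: quadrants alternate black/white around the corner.
    # A tanh step stands in for the Gaussian-PSF optical blur.
    y, x = np.mgrid[0:size, 0:size].astype(float)
    sx = 0.5 * (1 + np.tanh((x - corner[0]) / blur_sigma))
    sy = 0.5 * (1 + np.tanh((y - corner[1]) / blur_sigma))
    return sx * sy + (1 - sx) * (1 - sy)  # intensities in [0, 1]

def corner_estimate(img):
    # Centroid of the gradient energy: for a symmetric X-junction it
    # coincides with the corner, so it serves as a toy subpixel detector.
    gy, gx = np.gradient(img)
    w = gx**2 + gy**2
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return np.array([(x * w).sum() / w.sum(), (y * w).sum() / w.sum()])

truth = np.array([10.0, 10.0])
sigma_n = 0.02  # photometric noise std, 2% of the dynamic range (assumed)
errors = []
for _ in range(500):
    noisy = chessboard_patch() + rng.normal(0.0, sigma_n, (21, 21))
    errors.append(corner_estimate(noisy) - truth)
errors = np.array(errors)
print("geometric error bias (px):", errors.mean(axis=0))
print("geometric error std  (px):", errors.std(axis=0))
```

Even a few percent of intensity noise yields a subpixel but nonzero scatter of the detected corner, which is exactly the quantity a stochastic feature error model has to predict.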
Figure 2: Optical blur effect in a measurement image.

Although feature geometric error distributions are necessary in visual navigation, the aforementioned challenges have not been well solved yet. In state-of-the-art visual navigation methods such as ORB-SLAM [5], the geometric error covariance is normally set to a heuristic value (e.g., 1 pixel in ORB-SLAM). This is unacceptable for integrity monitoring, since tuning the parameter for specific scenarios does not ensure that the model remains valid when the visible scene changes. The reprojection error (the feature location residual given the estimated pose) is widely used in visual navigation textbooks such as [6] to describe the feature error. However, residual statistics are obviously not a proper error model, since the estimated states used to compute the reprojection error can already be biased. Kumar and Osechas [7] and Edwards et al. [8] have shown that for designed patterns the feature location error follows a Gaussian distribution in nominal situations. Nevertheless, these results remain qualitative, since the variance of the distribution is still an ad-hoc value obtained from experiments in particular scenarios.

In this work, we propose a subpixel-precision geometric error model of detected corners, named CCGEM (Chessboard Corner Geometric Error Model). CCGEM targets a specific type of corner (chessboard-like 'X'-junctions), which can either come from a designed landmark or be an extracted natural feature. It models the stochastic geometric error as a function of a few local parameters of the measurement image, which vary as the lighting or the visible scene changes. These parameters can be extracted from the local image patches around the corners with affordable complexity. Some coefficients in the model depend on the optical instrument used; they can be obtained through a calibration process for each distinct camera-lens combination, so that the model generalizes to the different optical systems of end users. CCGEM is therefore a quantitative error model of the feature location that generalizes over optical systems and lighting conditions. In addition, conservative strategies are proposed in the coefficient fitting process to meet integrity demands.

Published in: Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2019), October 11, 2019.
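Among the three integrity components listed in the introduction, the DOP computation can be sketched for a toy setup. The pinhole model, focal length, pattern geometry, and the known-attitude assumption (only the 3-D camera position is estimated) are illustrative choices, not values from the paper: the Jacobian H of the stacked 2-D feature coordinates with respect to the camera position maps a feature-error standard deviation σ_px to a position-error covariance of roughly σ_px² (HᵀH)⁻¹.

```python
import numpy as np

def project(point, cam_pos, f=800.0):
    # Pinhole projection; camera axes assumed aligned with the world frame.
    p = point - cam_pos
    return f * np.array([p[0] / p[2], p[1] / p[2]])

def position_dop(landmarks, cam_pos, f=800.0, eps=1e-5):
    # Numeric Jacobian H of all 2-D feature coordinates w.r.t. the 3-D
    # camera position; position covariance ~ sigma_px^2 * (H^T H)^-1.
    H = np.zeros((2 * len(landmarks), 3))
    for i, lm in enumerate(landmarks):
        for j in range(3):
            d = np.zeros(3)
            d[j] = eps
            H[2 * i:2 * i + 2, j] = (project(lm, cam_pos + d, f)
                                     - project(lm, cam_pos - d, f)) / (2 * eps)
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

# Four corners of a 0.4 m chessboard pattern, 1 m in front of the camera.
pattern = [np.array([x, y, 0.0]) for x in (-0.2, 0.2) for y in (-0.2, 0.2)]
cam = np.array([0.0, 0.0, -1.0])
dop = position_dop(pattern, cam)
print(f"position DOP: {dop:.4f} m per pixel of feature error")
```

Scaling such a DOP by the feature-error standard deviation predicted by a model like CCGEM, instead of a heuristic 1-pixel value, is what turns the geometry term into a quantitative position-error bound.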
Citations: 7