C. Okolie, J. Mills, A. Adeleke, J. Smit, I. Maduako
{"title":"用于数字高程模型误差预测的梯度决策树的可解释性","authors":"C. Okolie, J. Mills, A. Adeleke, J. Smit, I. Maduako","doi":"10.5194/isprs-archives-xlviii-m-3-2023-161-2023","DOIUrl":null,"url":null,"abstract":"Abstract. Gradient boosted decision trees (GBDTs) have repeatedly outperformed several machine learning and deep learning algorithms in competitive data science. However, the explainability of GBDT predictions especially with earth observation data is still an open issue requiring more focus by researchers. In this study, we investigate the explainability of Bayesian-optimised GBDT algorithms for modelling and prediction of the vertical error in Copernicus GLO-30 digital elevation model (DEM). Three GBDT algorithms are investigated (extreme gradient boosting - XGBoost, light boosting machine – LightGBM, and categorical boosting – CatBoost), and SHapley Additive exPlanations (SHAP) are adopted for the explainability analysis. The assessment sites are selected from urban/industrial and mountainous landscapes in Cape Town, South Africa. Training datasets are comprised of eleven predictor variables which are known influencers of elevation error: elevation, slope, aspect, surface roughness, topographic position index, terrain ruggedness index, terrain surface texture, vector roughness measure, forest cover, bare ground cover, and urban footprints. The target variable (elevation error) was calculated with respect to accurate airborne LiDAR. After model training and testing, the GBDTs were applied for predicting the elevation error at model implementation sites. The SHAP plots showed varying levels of emphasis on the parameters depending on the land cover and terrain. For example, in the urban area, the influence of vector ruggedness measure surpassed that of first-order derivatives such as slope and aspect. Thus, it is recommended that machine learning modelling procedures and workflows incorporate model explainability to ensure robust interpretation and understanding of model predictions by both technical and non-technical users.\n","PeriodicalId":30634,"journal":{"name":"The International Archives of the Photogrammetry Remote Sensing and Spatial Information Sciences","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"THE EXPLAINABILITY OF GRADIENT-BOOSTED DECISION TREES FOR DIGITAL ELEVATION MODEL (DEM) ERROR PREDICTION\",\"authors\":\"C. Okolie, J. Mills, A. Adeleke, J. Smit, I. Maduako\",\"doi\":\"10.5194/isprs-archives-xlviii-m-3-2023-161-2023\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract. Gradient boosted decision trees (GBDTs) have repeatedly outperformed several machine learning and deep learning algorithms in competitive data science. However, the explainability of GBDT predictions especially with earth observation data is still an open issue requiring more focus by researchers. In this study, we investigate the explainability of Bayesian-optimised GBDT algorithms for modelling and prediction of the vertical error in Copernicus GLO-30 digital elevation model (DEM). Three GBDT algorithms are investigated (extreme gradient boosting - XGBoost, light boosting machine – LightGBM, and categorical boosting – CatBoost), and SHapley Additive exPlanations (SHAP) are adopted for the explainability analysis. The assessment sites are selected from urban/industrial and mountainous landscapes in Cape Town, South Africa. 
Training datasets are comprised of eleven predictor variables which are known influencers of elevation error: elevation, slope, aspect, surface roughness, topographic position index, terrain ruggedness index, terrain surface texture, vector roughness measure, forest cover, bare ground cover, and urban footprints. The target variable (elevation error) was calculated with respect to accurate airborne LiDAR. After model training and testing, the GBDTs were applied for predicting the elevation error at model implementation sites. The SHAP plots showed varying levels of emphasis on the parameters depending on the land cover and terrain. For example, in the urban area, the influence of vector ruggedness measure surpassed that of first-order derivatives such as slope and aspect. Thus, it is recommended that machine learning modelling procedures and workflows incorporate model explainability to ensure robust interpretation and understanding of model predictions by both technical and non-technical users.\\n\",\"PeriodicalId\":30634,\"journal\":{\"name\":\"The International Archives of the Photogrammetry Remote Sensing and Spatial Information Sciences\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-09-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The International Archives of the Photogrammetry Remote Sensing and Spatial Information Sciences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5194/isprs-archives-xlviii-m-3-2023-161-2023\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The International Archives of the Photogrammetry Remote Sensing and Spatial Information Sciences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5194/isprs-archives-xlviii-m-3-2023-161-2023","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Social Sciences","Score":null,"Total":0}
THE EXPLAINABILITY OF GRADIENT-BOOSTED DECISION TREES FOR DIGITAL ELEVATION MODEL (DEM) ERROR PREDICTION
Abstract. Gradient-boosted decision trees (GBDTs) have repeatedly outperformed several machine learning and deep learning algorithms in competitive data science. However, the explainability of GBDT predictions, especially with earth observation data, is still an open issue requiring more attention from researchers. In this study, we investigate the explainability of Bayesian-optimised GBDT algorithms for modelling and prediction of the vertical error in the Copernicus GLO-30 digital elevation model (DEM). Three GBDT algorithms are investigated (extreme gradient boosting – XGBoost, light gradient boosting machine – LightGBM, and categorical boosting – CatBoost), and SHapley Additive exPlanations (SHAP) are adopted for the explainability analysis. The assessment sites are selected from urban/industrial and mountainous landscapes in Cape Town, South Africa. The training datasets comprise eleven predictor variables that are known influences on elevation error: elevation, slope, aspect, surface roughness, topographic position index, terrain ruggedness index, terrain surface texture, vector ruggedness measure, forest cover, bare ground cover, and urban footprints. The target variable (elevation error) was calculated with respect to accurate airborne LiDAR data. After model training and testing, the GBDTs were applied to predict the elevation error at model implementation sites. The SHAP plots showed varying levels of emphasis on the predictors depending on the land cover and terrain. For example, in the urban area, the influence of the vector ruggedness measure surpassed that of first-order derivatives such as slope and aspect. It is therefore recommended that machine learning modelling procedures and workflows incorporate model explainability to ensure robust interpretation and understanding of model predictions by both technical and non-technical users.
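As a rough illustration of the workflow the abstract describes, the sketch below fits a gradient-boosted regressor to the eleven terrain and land-cover predictors and inspects it with SHAP. The column names, the input file training_samples.csv, and the fixed hyperparameters are assumptions made for illustration only; in the study itself the hyperparameters are tuned with Bayesian optimisation, three GBDT implementations are compared, and the target is the GLO-30 elevation error relative to airborne LiDAR.

```python
# Minimal sketch of the GBDT + SHAP workflow, assuming the predictors and the
# target live in a CSV with hypothetical column names.
import pandas as pd
import xgboost as xgb
import shap

# The eleven predictor variables listed in the abstract (names assumed).
feature_cols = [
    "elevation", "slope", "aspect", "surface_roughness",
    "topographic_position_index", "terrain_ruggedness_index",
    "terrain_surface_texture", "vector_ruggedness_measure",
    "forest_cover", "bare_ground_cover", "urban_footprint",
]

# Hypothetical training file: per-sample predictors plus the target column
# "elevation_error" (GLO-30 minus the airborne LiDAR reference).
df = pd.read_csv("training_samples.csv")
X, y = df[feature_cols], df["elevation_error"]

# Gradient-boosted regressor; these hyperparameters are placeholders and would
# normally come from Bayesian optimisation (e.g. optuna or scikit-optimize).
model = xgb.XGBRegressor(
    n_estimators=500, learning_rate=0.05, max_depth=6, subsample=0.8
)
model.fit(X, y)

# SHAP explainability: per-feature contributions to each predicted error,
# summarised across the training samples.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X, feature_names=feature_cols)
```

The same pattern applies to LightGBM and CatBoost by swapping in their regressor classes; SHAP's TreeExplainer supports all three tree-based libraries.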