{"title":"评估基于物理场景建模的Lidar/HSI直接方法","authors":"Ryan N. Givens, K. Walli, M. Eismann","doi":"10.1109/AIPR.2014.7041906","DOIUrl":null,"url":null,"abstract":"Recent work has been able to automate the process of generating three-dimensional, spectrally attributed scenes for use in physics-based modeling software using the Lidar/Hyperspectral Direct (LHD) method. The LHD method autonomously generates three-dimensional Digital Imaging and Remote Sensing Image Generation (DIRSIG) scenes from input high-resolution imagery, lidar data, and hyperspectral imagery and has been shown to do this successfully using both modeled and real datasets. While the output scenes look realistic and appear to match the input scenes under qualitative comparisons, a more quantitative approach is needed to evaluate the full utility of these autonomously generated scenes. This paper seeks to improve the evaluation of the spatial and spectral accuracy of autonomously generated three-dimensional scenes using the DIRSIG model. Two scenes are presented for this evaluation. The first is generated from a modeled dataset and the second is generated using data collected over a real-world site. DIRSIG-generated synthetic imagery over the recreated scenes are then compared to the original input imagery to evaluate how well the recreated scenes match the original scenes in spatial and spectral accuracy and to determine the ability of the recreated scenes to produce useful outputs for algorithm development.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evaluating the Lidar/HSI direct method for physics-based scene modeling\",\"authors\":\"Ryan N. Givens, K. Walli, M. Eismann\",\"doi\":\"10.1109/AIPR.2014.7041906\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent work has been able to automate the process of generating three-dimensional, spectrally attributed scenes for use in physics-based modeling software using the Lidar/Hyperspectral Direct (LHD) method. The LHD method autonomously generates three-dimensional Digital Imaging and Remote Sensing Image Generation (DIRSIG) scenes from input high-resolution imagery, lidar data, and hyperspectral imagery and has been shown to do this successfully using both modeled and real datasets. While the output scenes look realistic and appear to match the input scenes under qualitative comparisons, a more quantitative approach is needed to evaluate the full utility of these autonomously generated scenes. This paper seeks to improve the evaluation of the spatial and spectral accuracy of autonomously generated three-dimensional scenes using the DIRSIG model. Two scenes are presented for this evaluation. The first is generated from a modeled dataset and the second is generated using data collected over a real-world site. 
DIRSIG-generated synthetic imagery over the recreated scenes are then compared to the original input imagery to evaluate how well the recreated scenes match the original scenes in spatial and spectral accuracy and to determine the ability of the recreated scenes to produce useful outputs for algorithm development.\",\"PeriodicalId\":210982,\"journal\":{\"name\":\"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)\",\"volume\":\"64 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AIPR.2014.7041906\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIPR.2014.7041906","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Evaluating the Lidar/HSI direct method for physics-based scene modeling
Recent work has been able to automate the process of generating three-dimensional, spectrally attributed scenes for use in physics-based modeling software using the Lidar/Hyperspectral Direct (LHD) method. The LHD method autonomously generates three-dimensional Digital Imaging and Remote Sensing Image Generation (DIRSIG) scenes from input high-resolution imagery, lidar data, and hyperspectral imagery, and has been shown to do this successfully using both modeled and real datasets. While the output scenes look realistic and appear to match the input scenes under qualitative comparison, a more quantitative approach is needed to evaluate the full utility of these autonomously generated scenes. This paper seeks to improve the evaluation of the spatial and spectral accuracy of autonomously generated three-dimensional scenes using the DIRSIG model. Two scenes are presented for this evaluation. The first is generated from a modeled dataset and the second is generated using data collected over a real-world site. DIRSIG-generated synthetic imagery over the recreated scenes is then compared to the original input imagery to evaluate how well the recreated scenes match the original scenes in spatial and spectral accuracy and to determine the ability of the recreated scenes to produce useful outputs for algorithm development.
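The abstract does not state which quantitative metric is used for the spectral comparison; a common choice for comparing a synthetic hyperspectral cube against the original imagery is the per-pixel spectral angle. The sketch below is illustrative only, under the assumption of co-registered cubes of shape (rows, cols, bands), and is not drawn from the paper itself.

```python
# Illustrative sketch (not from the paper): per-pixel spectral angle between an
# original HSI cube and a DIRSIG-generated synthetic cube. Variable names,
# array shapes, and the choice of metric are assumptions for this example.
import numpy as np

def spectral_angle_map(original, synthetic):
    """Per-pixel spectral angle (radians) between two cubes of shape (rows, cols, bands)."""
    dot = np.sum(original * synthetic, axis=-1)
    norms = np.linalg.norm(original, axis=-1) * np.linalg.norm(synthetic, axis=-1)
    # Guard against zero-norm spectra and clip for numerical safety before arccos.
    cos_angle = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)
    return np.arccos(cos_angle)

# Example usage with random stand-in data (320 x 320 pixels, 210 bands).
rng = np.random.default_rng(0)
orig = rng.random((320, 320, 210))
synth = orig + 0.01 * rng.standard_normal(orig.shape)
sam = spectral_angle_map(orig, synth)
print(f"Mean spectral angle: {sam.mean():.4f} rad")
```

Smaller mean angles indicate closer spectral agreement between the recreated scene's synthetic imagery and the original input imagery; spatial accuracy would require a separate, geometry-based comparison.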