D. Bolkas, Jeffrey Chiampi, J. Chapman, Vincent F. Pavill
{"title":"创建融合sUAS和TLS点云的虚拟现实环境","authors":"D. Bolkas, Jeffrey Chiampi, J. Chapman, Vincent F. Pavill","doi":"10.1080/19479832.2020.1716861","DOIUrl":null,"url":null,"abstract":"ABSTRACT In recent years, immersive virtual reality has been used in disciplines such as engineering, sciences, and education. Point-cloud technologies such as laser scanning and unmanned aerial systems have become important for creating virtual environments. This paper discusses creating virtual environments from 3D point-cloud data suitable for immersive and interactive virtual reality. Both laser scanning and sUAS point-clouds are utilised. These point-clouds are merged using a custom-made algorithm that identifies data gaps in the master dataset (laser scanner) and fills them with data from a slave dataset (sUAS) resulting in a more complete dataset that is used for terrain modelling and 3D modelling of objects. The terrain and 3D objects are then textured with custom-made and free textures to provide a sense of realism in the objects. The created virtual environment is a digital copy of a part of the Penn State Wilkes-Barre campus. This virtual environment will be used in immersive and interactive surveying laboratories to assess the role of immersive virtual reality in surveying engineering education.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":"11 1","pages":"136 - 161"},"PeriodicalIF":1.8000,"publicationDate":"2020-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/19479832.2020.1716861","citationCount":"15","resultStr":"{\"title\":\"Creating a virtual reality environment with a fusion of sUAS and TLS point-clouds\",\"authors\":\"D. Bolkas, Jeffrey Chiampi, J. Chapman, Vincent F. 
Pavill\",\"doi\":\"10.1080/19479832.2020.1716861\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT In recent years, immersive virtual reality has been used in disciplines such as engineering, sciences, and education. Point-cloud technologies such as laser scanning and unmanned aerial systems have become important for creating virtual environments. This paper discusses creating virtual environments from 3D point-cloud data suitable for immersive and interactive virtual reality. Both laser scanning and sUAS point-clouds are utilised. These point-clouds are merged using a custom-made algorithm that identifies data gaps in the master dataset (laser scanner) and fills them with data from a slave dataset (sUAS) resulting in a more complete dataset that is used for terrain modelling and 3D modelling of objects. The terrain and 3D objects are then textured with custom-made and free textures to provide a sense of realism in the objects. The created virtual environment is a digital copy of a part of the Penn State Wilkes-Barre campus. 
This virtual environment will be used in immersive and interactive surveying laboratories to assess the role of immersive virtual reality in surveying engineering education.\",\"PeriodicalId\":46012,\"journal\":{\"name\":\"International Journal of Image and Data Fusion\",\"volume\":\"11 1\",\"pages\":\"136 - 161\"},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2020-01-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1080/19479832.2020.1716861\",\"citationCount\":\"15\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Image and Data Fusion\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/19479832.2020.1716861\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"REMOTE SENSING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Image and Data Fusion","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/19479832.2020.1716861","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"REMOTE SENSING","Score":null,"Total":0}
Creating a virtual reality environment with a fusion of sUAS and TLS point-clouds
ABSTRACT In recent years, immersive virtual reality has been used in disciplines such as engineering, the sciences, and education. Point-cloud technologies such as terrestrial laser scanning (TLS) and small unmanned aerial systems (sUAS) have become important for creating virtual environments. This paper discusses creating virtual environments from 3D point-cloud data suitable for immersive and interactive virtual reality. Both laser-scanning and sUAS point-clouds are utilised. These point-clouds are merged using a custom-made algorithm that identifies data gaps in the master dataset (laser scanner) and fills them with data from a slave dataset (sUAS), resulting in a more complete dataset that is used for terrain modelling and 3D modelling of objects. The terrain and 3D objects are then textured with custom-made and free textures to provide a sense of realism. The created virtual environment is a digital copy of a part of the Penn State Wilkes-Barre campus. This virtual environment will be used in immersive and interactive surveying laboratories to assess the role of immersive virtual reality in surveying engineering education.
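The gap-filling fusion described in the abstract can be illustrated with a minimal sketch: keep every point from the master (TLS) cloud, and add a slave (sUAS) point only where the master cloud has a data gap. The paper's actual algorithm is not reproduced here; this sketch assumes a simple nearest-neighbour test ("a slave point fills a gap if no master point lies within a radius `gap_radius`"), and the function and parameter names are illustrative, not the authors'.

```python
import numpy as np
from scipy.spatial import cKDTree

def fuse_point_clouds(master: np.ndarray, slave: np.ndarray,
                      gap_radius: float = 0.1) -> np.ndarray:
    """Merge two (N, 3) point clouds, filling gaps in `master` with `slave` points.

    Hypothetical sketch of master/slave gap filling: a slave point is kept
    only if it is farther than `gap_radius` from every master point, i.e.
    it lies inside a hole of the master dataset.
    """
    tree = cKDTree(master)
    # Distance from each slave point to its nearest master point.
    dist, _ = tree.query(slave, k=1)
    # Slave points with no nearby master point are treated as gap fillers.
    fill = slave[dist > gap_radius]
    # The fused cloud is the full master plus only the gap-filling slave points.
    return np.vstack([master, fill])
```

With this test, a slave point that nearly coincides with a master point is discarded as redundant, while a slave point far from any master point survives and fills the gap; the radius would in practice be tuned to the point density of the master scan.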
Journal introduction:
International Journal of Image and Data Fusion provides a single source of information for all aspects of image and data fusion methodologies, developments, techniques and applications. Image and data fusion techniques are important for combining the many sources of satellite, airborne and ground-based imaging systems, and integrating these with other related data sets for enhanced information extraction and decision making.

Image and data fusion aims at the integration of multi-sensor, multi-temporal, multi-resolution and multi-platform image data, together with geospatial data, GIS, in-situ, and other statistical data sets, for improved information extraction and increased reliability of the information. This leads to more accurate information that provides for robust operational performance, i.e. increased confidence, reduced ambiguity and improved classification enabling evidence-based management.

The journal welcomes original research papers, review papers, shorter letters, technical articles, book reviews and conference reports in all areas of image and data fusion including, but not limited to, the following aspects and topics:
• Automatic registration/geometric aspects of fusing images with different spatial, spectral or temporal resolutions; phase information; or acquired in different modes
• Pixel, feature and decision level fusion algorithms and methodologies
• Data assimilation: fusing data with models
• Multi-source classification and information extraction
• Integration of satellite, airborne and terrestrial sensor systems
• Fusing temporal data sets for change detection studies (e.g. for Land Cover/Land Use Change studies)
• Image and data mining from multi-platform, multi-source, multi-scale, multi-temporal data sets (e.g. geometric information, topological information, statistical information, etc.)