The Photogrammetric Record: Latest Articles

Linear target change detection from a single image based on three-dimensional real scene
The Photogrammetric Record Pub Date: 2023-12-26 DOI: 10.1111/phor.12470
Yang Liu, Zheng Ji, Lingfeng Chen, Yuchen Liu
{"title":"Linear target change detection from a single image based on three-dimensional real scene","authors":"Yang Liu, Zheng Ji, Lingfeng Chen, Yuchen Liu","doi":"10.1111/phor.12470","DOIUrl":"https://doi.org/10.1111/phor.12470","url":null,"abstract":"Change detection is a critical component in the field of remote sensing, with significant implications for resource management and land monitoring. Currently, most conventional methods for remote sensing change detection often rely on qualitative monitoring, which usually requires data collection from the entire scene over multiple time periods. In this paper, we propose a method that can be computationally intensive and lacks reusability, especially when dealing with large datasets. We use a novel methodology that leverages the texture features and geometric structure information derived from three-dimensional (3D) real scenes. By establishing a two-dimensional (2D)–3D geometric relationship between a single observational image and the corresponding 3D scene, we can obtain more accurate positional information for the image. This relationship allows us to transfer the depth information from the 3D model to the observational image, thereby facilitating precise geometric change measurements for specific planar targets. Experimental results indicate that our approach enables millimetre-level change detection of minuscule targets based on a single image. Compared with conventional methods, our technique offers enhanced efficiency and reusability, making it a valuable tool for the fine-grained change detection of small targets based on 3D real scene.","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139068045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
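The core of this method is the 2D–3D geometric relationship: once a single image is registered to the 3D real scene, depth can be transferred from the model to the image, and a pixel displacement of a planar target converts to a metric one. The sketch below is a minimal illustration of that geometry, not the authors' code; the intrinsics, pose and point coordinates are made-up numbers.

```python
# Minimal sketch: transfer depth from a registered 3D scene to a single image,
# then convert a pixel displacement of a planar target into metres.
import numpy as np

def project(K, R, t, X):
    """Project 3D world points X (N, 3) into the image; return pixels and depths."""
    Xc = (R @ X.T + t.reshape(3, 1)).T            # world -> camera frame
    depth = Xc[:, 2]
    uv = (K @ Xc.T).T
    return uv[:, :2] / uv[:, 2:3], depth

def pixel_shift_to_metric(K, depth, duv):
    """Convert a pixel displacement duv (2,) at a known depth to metres,
    assuming the target moves parallel to the image plane (planar target)."""
    fx, fy = K[0, 0], K[1, 1]
    return np.array([duv[0] * depth / fx, duv[1] * depth / fy])

# Hypothetical camera and target, for illustration only.
K = np.array([[3000.0, 0, 2000], [0, 3000.0, 1500], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
X = np.array([[0.2, 0.1, 5.0]])                    # target point 5 m away
uv, d = project(K, R, t, X)
print(pixel_shift_to_metric(K, d[0], np.array([1.2, 0.0])))  # ~[0.002, 0] m
```

With a 3000-pixel focal length and a target 5 m from the camera, a 1.2-pixel shift corresponds to about 2 mm, which is the regime of millimetre-level detection the abstract reports.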
Rapid 3D modelling: Clustering method based on dynamic load balancing strategy
The Photogrammetric Record Pub Date: 2023-12-11 DOI: 10.1111/phor.12473
Yingwei Ge, Bingxuan Guo, Guozheng Xu, Yawen Liu, Xiao Jiang, Zhe Peng
{"title":"Rapid 3D modelling: Clustering method based on dynamic load balancing strategy","authors":"Yingwei Ge, Bingxuan Guo, Guozheng Xu, Yawen Liu, Xiao Jiang, Zhe Peng","doi":"10.1111/phor.12473","DOIUrl":"https://doi.org/10.1111/phor.12473","url":null,"abstract":"Three-dimensional (3D) reconstruction is a pivotal research area within computer vision and photogrammetry, offering a valuable foundation of data for the development of smart cities. However, existing methods for constructing 3D models from unmanned aerial vehicle (UAV) images often suffer from slow processing speeds and low central processing unit (CPU)/graphics processing unit (GPU) utilization rates. Furthermore, the utilization of cluster-based distributed computing for 3D modelling frequently results in inefficient resource allocation across nodes. To address these challenges, this paper presents a novel approach to 3D modelling in clusters, incorporating a dynamic load-balancing strategy. The method divides the 3D reconstruction process into multiple stages to lay the groundwork for distributing tasks across multiple nodes efficiently. Instead of traditional traversal-based communication, this approach employs gossip communication techniques to reduce the network overhead. To boost the modelling efficiency, a dynamic load-balancing strategy is introduced that prevents nodes from becoming overloaded, thus optimizing resource usage during the modelling process and alleviating resource waste issues in multidevice clusters. The experimental results indicate that in small-scale data environments, this approach improves CPU/GPU utilization by 35.8%/23.4% compared with single-machine utilization. In large-scale data environments for cluster-based 3D modelling tests, this method enhances the average efficiency by 61.4% compared with traditional 3D modelling software while maintaining a comparable model accuracy. In computer vision and photogrammetry, research enhances 3D reconstruction for smart cities. To address slow UAV-based methods, the study employs dynamic load balancing and ‘gossip’ communication to minimize network overhead. In small data tests, the approach improves CPU and GPU efficiency by 20.7% and 40.3%, respectively. In large data settings, it outperforms existing methods by 61.38% while maintaining accuracy.","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138629925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
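The abstract names two mechanisms, gossip communication and dynamic load balancing, which can be sketched together. The following is a minimal sketch under assumptions (node names, capacities, task costs and the fanout value are illustrative, not the paper's implementation): each node pushes its utilisation to a few random peers rather than traversing the whole cluster, and each reconstruction stage is assigned to the least-utilised node.

```python
# Minimal sketch of gossip-style state exchange plus dynamic load balancing.
import random

class Node:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.load = name, capacity, 0.0
        self.view = {}                          # last known utilisation of peers

    def gossip(self, peers, fanout=2):
        """Push state to a few random peers instead of every node,
        so per-round messages per node stay constant as the cluster grows."""
        for peer in random.sample(peers, min(fanout, len(peers))):
            peer.view[self.name] = self.load / self.capacity

def assign(task_cost, nodes):
    """Send the next reconstruction stage to the least-utilised node.
    (A real scheduler would rank nodes by its gossiped view, not exact loads.)"""
    target = min(nodes, key=lambda n: n.load / n.capacity)
    target.load += task_cost
    return target

nodes = [Node("n0", 1.0), Node("n1", 2.0), Node("n2", 1.5)]
for cost in [3.0, 1.0, 2.0, 4.0, 1.5]:          # hypothetical stage costs
    n = assign(cost, nodes)
    n.gossip([p for p in nodes if p is not n])
print({n.name: round(n.load, 2) for n in nodes})
```

Push-style gossip sends only `fanout` messages per node per round, which is the overhead reduction the abstract attributes to replacing traversal-based communication.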
Learning point cloud context information based on 3D transformer for more accurate and efficient classification
The Photogrammetric Record Pub Date: 2023-12-10 DOI: 10.1111/phor.12469
Yiping Chen, Shuai Zhang, Weisheng Lin, Shuhang Zhang, Wuming Zhang
{"title":"Learning point cloud context information based on 3D transformer for more accurate and efficient classification","authors":"Yiping Chen, Shuai Zhang, Weisheng Lin, Shuhang Zhang, Wuming Zhang","doi":"10.1111/phor.12469","DOIUrl":"https://doi.org/10.1111/phor.12469","url":null,"abstract":"The point cloud semantic understanding task has made remarkable progress along with the development of 3D deep learning. However, aggregating spatial information to improve the local feature learning capability of the network remains a major challenge. Many methods have been used for improving local information learning, such as constructing a multi-area structure for capturing different area information. However, it will lose some local information due to the independent learning point feature. To solve this problem, a new network is proposed that considers the importance of the differences between points in the neighbourhood. Capturing local feature information can be enhanced by highlighting the different feature importance of the point cloud in the neighbourhood. First, T-Net is constructed to learn the point cloud transformation matrix for point cloud disorder. Second, transformer is used to improve the problem of local information loss due to the independence of each point in the neighbourhood. The experimental results show that 92.2% accuracy overall was achieved on the ModelNet40 dataset and 93.8% accuracy overall was achieved on the ModelNet10 dataset.","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138566670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
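A minimal PyTorch sketch of the two components the abstract describes: a T-Net that learns an alignment transform for the unordered point set, and a transformer layer that weights points in a neighbourhood instead of treating them independently. The layer widths, head count and use of nn.MultiheadAttention are illustrative assumptions, not the published architecture.

```python
# Minimal sketch: T-Net alignment followed by self-attention over points.
import torch
import torch.nn as nn

class TNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, 128), nn.ReLU())
        self.head = nn.Linear(128, 9)

    def forward(self, pts):                        # pts: (B, N, 3)
        feat = self.mlp(pts).max(dim=1).values     # symmetric max-pool: order-invariant
        T = self.head(feat).view(-1, 3, 3) + torch.eye(3)  # offset by identity (T-Net convention)
        return pts @ T                             # align the point cloud

class PointAttention(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Linear(3, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, pts):
        x = self.embed(pts)
        ctx, _ = self.attn(x, x, x)                # each point attends to the others
        return ctx

pts = torch.randn(2, 256, 3)
print(PointAttention()(TNet()(pts)).shape)         # torch.Size([2, 256, 64])
```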
Weakly supervised semantic segmentation of mobile laser scanning point clouds via category balanced random annotation and deep consistency-guided self-distillation mechanism
The Photogrammetric Record Pub Date: 2023-12-01 DOI: 10.1111/phor.12468
Jiacheng Liu, Haiyan Guan, Xiangda Lei, Yongtao Yu
{"title":"Weakly supervised semantic segmentation of mobile laser scanning point clouds via category balanced random annotation and deep consistency-guided self-distillation mechanism","authors":"Jiacheng Liu, Haiyan Guan, Xiangda Lei, Yongtao Yu","doi":"10.1111/phor.12468","DOIUrl":"https://doi.org/10.1111/phor.12468","url":null,"abstract":"Scene understanding of mobile laser scanning (MLS) point clouds is vital in autonomous driving and virtual reality. Most existing semantic segmentation methods rely on a large number of accurately labelled points, which is time-consuming and labour-intensive. To cope with this issue, this paper explores a weakly supervised learning (WSL) framework for MLS data. Specifically, a category balanced random annotation (CBRA) strategy is employed to obtain balanced labels and enhance model performance. Next, based on KPConv-Net as a backbone network, a WSL semantic segmentation framework is developed for MLS point clouds via a deep consistency-guided self-distillation (DCS) mechanism. The DCS mechanism consists of a deep consistency-guided self-distillation branch and an entropy regularisation branch. The self-distillation branch is designed by constructing an auxiliary network to maintain the consistency of predicted distributions between the auxiliary network and the original network, while the entropy regularisation branch is designed to increase the confidence of the network predicted results. The proposed WSL framework was evaluated on the WHU-MLS, NPM3D and Toronto3D datasets. By using only 0.1% labelled points, the proposed WSL framework achieved a competitive performance in MLS point cloud semantic segmentation with the mean Intersection over Union (mIoU) scores of 60.08%, 72.0% and 67.42% on the three datasets, respectively.","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138506930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
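The category balanced random annotation (CBRA) strategy can be illustrated briefly: instead of drawing the 0.1% label budget uniformly, which would starve rare classes, the budget is split evenly across categories. The sketch below is an illustration under assumptions (the class counts and exact splitting rule are hypothetical), not the paper's code.

```python
# Minimal sketch of category balanced random annotation over a point cloud.
import numpy as np

def cbra(labels, budget_ratio=0.001, rng=np.random.default_rng(0)):
    """Return indices of points to annotate, balanced over categories."""
    classes = np.unique(labels)
    per_class = max(1, int(len(labels) * budget_ratio / len(classes)))
    picked = []
    for c in classes:
        idx = np.flatnonzero(labels == c)
        # cap at the class size so small classes are not oversampled
        picked.append(rng.choice(idx, size=min(per_class, len(idx)), replace=False))
    return np.concatenate(picked)

labels = np.repeat([0, 1, 2], [96000, 3000, 1000])   # heavily imbalanced scene
ann = cbra(labels)
print(np.bincount(labels[ann]))                      # roughly equal per class
```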
The impact of oblique images and flight-planning scenarios on the accuracy of UAV 3D mapping
The Photogrammetric Record Pub Date: 2023-10-09 DOI: 10.1111/phor.12466
Ebadat Ghanbari Parmehr, Mohammad Savadkouhi, Meghdad Nopour
{"title":"The impact of oblique images and flight‐planning scenarios on the accuracy of UAV 3D mapping","authors":"Ebadat Ghanbari Parmehr, Mohammad Savadkouhi, Meghdad Nopour","doi":"10.1111/phor.12466","DOIUrl":"https://doi.org/10.1111/phor.12466","url":null,"abstract":"Abstract The developments in lightweight unmanned aerial vehicles (UAVs) and structure‐from‐motion (SfM)‐based software have opened a new era in 3D mapping which is notably cost‐effective and fast, though the photogrammetric blocks lead to systematic height error due to inaccurate camera calibration parameters particularly when the ground control points (GCPs) are few and unevenly distributed. The use of onboard Global Navigation Satellite System (GNSS) receivers (such as RTK‐ or PPK‐based devices which use the DGNSS technique) to obtain the accurate coordinates of camera perspective centres has reduced the need for ground surveys, nevertheless, the aforementioned systematic error was reported in the UAV photogrammetric blocks. In this research, three flight‐planning scenarios with oblique imagery in addition to the traditional nadir block were evaluated and processed with four different processing cases. Therefore, 16 various blocks with different overlaps, direct and indirect georeferencing approaches as well as flight‐planning scenarios were tested to examine and offer the best imaging network. The results denote that the combination of oblique images located on a circle in the centre of the block with the nadir block provides the best self‐calibration functionality and improves the final accuracy by 50% (from 0.163 to 0.085 m) for direct georeferenced blocks and by 40% (from 0.042 to 0.026 m) for indirect georeferenced blocks.","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135147216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
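A sketch of how such block variants are compared: each block is bundle-adjusted, then the root mean square error of the height residuals at independent check points is computed. The residual values below are hypothetical stand-ins for adjustment output; only the final improvement figures come from the abstract.

```python
# Minimal sketch: score block configurations by check-point height RMSE.
import numpy as np

def rmse(residuals):
    """Root mean square error of check-point residuals."""
    return float(np.sqrt(np.mean(np.square(residuals))))

# Hypothetical height residuals (m) at four check points for two of the
# 16 tested block variants.
blocks = {
    "nadir only, direct georeferencing": np.array([0.21, -0.15, 0.18, -0.12]),
    "nadir + central oblique circle, direct": np.array([0.09, -0.07, 0.10, -0.06]),
}
for name, dz in blocks.items():
    print(f"{name}: RMSE_z = {rmse(dz):.3f} m")

# Improvement as reported in the abstract for direct georeferencing:
print(f"improvement: {100 * (0.163 - 0.085) / 0.163:.0f}%")   # ~48%, i.e. ~50%
```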
High-resolution optical remote sensing image change detection based on dense connection and attention feature fusion network
The Photogrammetric Record Pub Date: 2023-09-27 DOI: 10.1111/phor.12462
Daifeng Peng, Chenchen Zhai, Yongjun Zhang, Haiyan Guan
{"title":"High‐resolution optical remote sensing image change detection based on dense connection and attention feature fusion network","authors":"Daifeng Peng, Chenchen Zhai, Yongjun Zhang, Haiyan Guan","doi":"10.1111/phor.12462","DOIUrl":"https://doi.org/10.1111/phor.12462","url":null,"abstract":"Abstract The detection of ground object changes from bi‐temporal images is of great significance for urban planning, land‐use/land‐cover monitoring and natural disaster assessment. To solve the limitation of incomplete change detection (CD) entities and inaccurate edges caused by the loss of detailed information, this paper proposes a network based on dense connections and attention feature fusion, namely Siamese NestedUNet with Attention Feature Fusion (SNAFF). First, multi‐level bi‐temporal features are extracted through a Siamese network. The dense connections between the sub‐nodes of the decoder are used to compensate for the missing location information as well as weakening the semantic differences between features. Then, the attention mechanism is introduced to combine global and local information to achieve feature fusion. Finally, a deep supervision strategy is used to suppress the problem of gradient vanishing and slow convergence speed. During the testing phase, the test time augmentation (TTA) strategy is adopted to further improve the CD performance. In order to verify the effectiveness of the proposed method, two datasets with different change types are used. The experimental results indicate that, compared with the comparison methods, the proposed SNAFF achieves the best quantitative results on both datasets, in which F1, IoU and OA in the LEVIR‐CD dataset are 91.47%, 84.28% and 99.13%, respectively, and the values in the CDD dataset are 96.91%, 94.01% and 99.27%, respectively. In addition, the qualitative results show that SNAFF can effectively retain the global and edge information of the detected entity, thus achieving the best visual performance.","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135538365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
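A minimal PyTorch sketch of attention-based feature fusion in the spirit described here: global (channel) and local (pixel-wise) attention jointly re-weight two feature maps before they are merged. The module layout and reduction ratio are illustrative assumptions, not the published SNAFF architecture.

```python
# Minimal sketch: fuse two bi-temporal feature maps with learned attention.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.global_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # global context
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.local_att = nn.Sequential(                     # pixel-wise context
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, a, b):
        x = a + b
        w = self.global_att(x) * self.local_att(x)          # combine both cues
        return w * a + (1 - w) * b                          # attention-weighted mix

t1 = torch.randn(1, 64, 32, 32)                             # bi-temporal features
t2 = torch.randn(1, 64, 32, 32)
print(AttentionFusion(64)(t1, t2).shape)                    # torch.Size([1, 64, 32, 32])
```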
Weak texture remote sensing image matching based on hybrid domain features and adaptive description method
The Photogrammetric Record Pub Date: 2023-09-26 DOI: 10.1111/phor.12464
Wupeng Yang, Yongxiang Yao, Yongjun Zhang, Yi Wan
{"title":"Weak texture remote sensing image matching based on hybrid domain features and adaptive description method","authors":"Wupeng Yang, Yongxiang Yao, Yongjun Zhang, Yi Wan","doi":"10.1111/phor.12464","DOIUrl":"https://doi.org/10.1111/phor.12464","url":null,"abstract":"Abstract Weak texture remote sensing image (WTRSI) has characteristics such as low reflectivity, high similarity of neighbouring pixels and insignificant differences between regions. These factors cause difficulties in feature extraction and description, which lead to unsuccessful matching. Therefore, this paper proposes a novel hybrid‐domain features and adaptive description (HFAD) approach to perform WTRSI matching. This approach mainly provides two contributions: (1) a new feature extractor that combines both the spatial domain scale space and the frequency domain scale space is established, where a weighted least square filter combined with a phase consistency filter is used to establish the frequency domain scale space; and (2) a new log‐polar descriptor of adaptive neighbourhood (LDAN) is established, where the neighbourhood window size of each descriptor is calculated according to the log‐normalised intensity value of feature points. This article prepares some remote sensing images under weak texture scenes which include deserts, dense forests, waters, ice and snow, and shadows. The data set contains 50 typical image pairs, on which the proposed HFAD was demonstrated and compared with state‐of‐the‐art matching algorithms (RIFT, HOWP, KAZE, POS‐SIFT and SIFT). The statistical results of the comparative experiment show that the HFAD can achieve the accuracy of matching within two pixels and confirm that the proposed algorithm is robust and effective.","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135719333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
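The LDAN idea can be sketched: the descriptor support of each keypoint is scaled by the log-normalised intensity at that point (darker, low-reflectivity regions get a larger window), and gradient magnitudes in the window are binned on a log-polar grid. The window sizes, bin counts and scaling rule below are illustrative assumptions, not the HFAD implementation.

```python
# Minimal sketch: log-polar descriptor with an intensity-adaptive window.
import numpy as np

def ldan_descriptor(img, kp, base_win=16, radial_bins=3, angular_bins=8):
    y, x = kp
    # adaptive window: darker (low-reflectivity) points get a larger support
    intensity = np.log1p(img[y, x]) / np.log1p(img.max())
    win = int(base_win * (2.0 - intensity))
    patch = img[max(0, y - win):y + win, max(0, x - win):x + win].astype(float)
    gy, gx = np.gradient(patch)
    yy, xx = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    yy, xx = yy - patch.shape[0] / 2, xx - patch.shape[1] / 2
    r = np.log1p(np.hypot(yy, xx))                 # log radius
    theta = np.arctan2(yy, xx)                     # polar angle
    hist, _, _ = np.histogram2d(
        r.ravel(), theta.ravel(),
        bins=(radial_bins, angular_bins),
        weights=np.hypot(gy, gx).ravel())          # gradient-magnitude weighted
    return (hist / (np.linalg.norm(hist) + 1e-8)).ravel()

img = (np.random.rand(128, 128) * 255).astype(np.uint8)
print(ldan_descriptor(img, (64, 64)).shape)        # (24,)
```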
Floor plan creation using a low-cost 360° camera
The Photogrammetric Record Pub Date: 2023-09-25 DOI: 10.1111/phor.12463
Jakub Vynikal, David Zahradník
{"title":"Floor plan creation using a low‐cost 360° camera","authors":"Jakub Vynikal, David Zahradník","doi":"10.1111/phor.12463","DOIUrl":"https://doi.org/10.1111/phor.12463","url":null,"abstract":"Abstract The creation of a 2D floor plan is an integral part of finishing a building construction. Legal obligations in different states often include submitting a precise floor plan for ownership purposes, as the building needs to be divided between new residents with reasonable precision. Common practice for floor plan generation includes manual measurements (tape or laser) and laser scanning (static or SLAM). In this paper, a novel approach is proposed using spherical photogrammetry, which is becoming increasingly popular due to its versatility, low cost and unexplored possibilities. Workflow is also noticeably faster than other methods, as video acquisition is rapid, on a par with SLAM. The accuracy and reliability of the measurements are then experimentally verified, comparing the results with established methods.","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135815731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
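Spherical photogrammetry rests on the equirectangular camera model: every pixel of a 360° frame maps to a ray direction, and SfM triangulates those rays to recover the wall geometry for the floor plan. The sketch below shows the standard pixel-to-ray mapping; the frame resolution and pixel coordinates are illustrative.

```python
# Minimal sketch: equirectangular pixel -> unit ray in camera coordinates.
import numpy as np

def pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit ray direction."""
    lon = (u / width - 0.5) * 2.0 * np.pi          # longitude: -pi .. pi
    lat = (0.5 - v / height) * np.pi               # latitude: -pi/2 .. pi/2
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

ray = pixel_to_ray(u=2880, v=960, width=3840, height=1920)
print(ray, np.linalg.norm(ray))                    # [1. 0. 0.] 1.0
```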
2023 International Conference on Metrology for Archaeology and Cultural Heritage
The Photogrammetric Record Pub Date: 2023-09-01 DOI: 10.1111/phor.3_12458
{"title":"2023 International Conference on Metrology for Archaeology and Cultural Heritage","authors":"","doi":"10.1111/phor.3_12458","DOIUrl":"https://doi.org/10.1111/phor.3_12458","url":null,"abstract":"The Photogrammetric RecordVolume 38, Issue 183 p. 451-452 NOTES 2023 International Conference on Metrology for Archaeology and Cultural Heritage First published: 28 September 2023 https://doi.org/10.1111/phor.3_12458Read the full textAboutPDF ToolsRequest permissionExport citationAdd to favoritesTrack citation ShareShare Give accessShare full text accessShare full-text accessPlease review our Terms and Conditions of Use and check box below to share full-text version of article.I have read and accept the Wiley Online Library Terms and Conditions of UseShareable LinkUse the link below to share a full-text version of this article with your friends and colleagues. Learn more.Copy URL Share a linkShare onEmailFacebookTwitterLinkedInRedditWechat No abstract is available for this article. Volume38, Issue183September 2023Pages 451-452 RelatedInformation","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135587990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
3D Computer Vision and Photogrammetry
The Photogrammetric Record Pub Date: 2023-09-01 DOI: 10.1111/phor.12458
{"title":"3D Computer Vision and Photogrammetry","authors":"","doi":"10.1111/phor.12458","DOIUrl":"https://doi.org/10.1111/phor.12458","url":null,"abstract":"The Photogrammetric RecordVolume 38, Issue 183 p. 450-450 NOTES 3D Computer Vision and Photogrammetry First published: 28 September 2023 https://doi.org/10.1111/phor.12458Read the full textAboutPDF ToolsRequest permissionExport citationAdd to favoritesTrack citation ShareShare Give accessShare full text accessShare full-text accessPlease review our Terms and Conditions of Use and check box below to share full-text version of article.I have read and accept the Wiley Online Library Terms and Conditions of UseShareable LinkUse the link below to share a full-text version of this article with your friends and colleagues. Learn more.Copy URL Share a linkShare onEmailFacebookTwitterLinkedInRedditWechat No abstract is available for this article. Volume38, Issue183September 2023Pages 450-450 RelatedInformation","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135588264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0