The Photogrammetric Record: Latest Articles

Adaptive region aggregation for multi-view stereo matching using deformable convolutional networks
The Photogrammetric Record Pub Date: 2023-08-21 DOI: 10.1111/phor.12459 Pages: 430–449
Han Hu, Liupeng Su, Shunfu Mao, Min Chen, Guoqiang Pan, Bo Xu, Qing Zhu
Abstract: Deep-learning methods have demonstrated promising performance in multi-view stereo (MVS) applications. However, it remains challenging to apply a geometric prior to adaptive matching windows in order to achieve efficient three-dimensional reconstruction. To address this problem, this paper proposes a learnable adaptive region aggregation method based on deformable convolutional networks (DCNs), which is integrated into the feature extraction workflow of a coarse-to-fine MVSNet method. Following the conventional MVSNet pipeline, a DCN is used to densely estimate and apply transformations in the feature extractor, forming a deformable feature pyramid network (DFPN). Furthermore, a dedicated offset regulariser is introduced to promote convergence of the DCN's learnable offsets. The effectiveness of the proposed DFPN is validated through quantitative and qualitative evaluations on the BlendedMVS and Tanks and Temples benchmark datasets in a cross-dataset evaluation setting.
Citations: 0
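
The entry above describes adding deformable convolutions with a regularised offset field to an MVSNet-style feature pyramid. Below is a minimal PyTorch sketch of that general idea, assuming torchvision's DeformConv2d and a simple L2 penalty standing in for the paper's dedicated offset regulariser; module and parameter names are illustrative, not the authors' code.

```python
# Minimal sketch of a deformable convolution block with an offset penalty,
# loosely following the idea of adaptive region aggregation in an MVS
# feature extractor. Names and the L2 offset penalty are assumptions.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        # One (dy, dx) offset per kernel position and output location.
        self.offset_conv = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
        nn.init.zeros_(self.offset_conv.weight)   # start from a regular sampling grid
        nn.init.zeros_(self.offset_conv.bias)

    def forward(self, x: torch.Tensor):
        offset = self.offset_conv(x)               # (B, 2*k*k, H, W)
        feat = self.deform_conv(x, offset)
        # Simple penalty discouraging overly large sampling offsets; stands in
        # for the paper's dedicated offset regulariser.
        offset_loss = offset.pow(2).mean()
        return feat, offset_loss


if __name__ == "__main__":
    block = DeformableBlock(32, 64)
    feats, reg = block(torch.randn(1, 32, 128, 160))
    print(feats.shape, reg.item())
```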

A survey on conventional and learning-based methods for multi-view stereo
The Photogrammetric Record Pub Date: 2023-08-13 DOI: 10.1111/phor.12456 Pages: 374–407
Elisavet (Ellie) Konstantina Stathopoulou, F. Remondino
Abstract: 3D reconstruction of scenes from multiple images, relying on robust correspondence search and depth estimation, has been thoroughly studied for both two-view and multi-view scenarios in recent years. Multi-view stereo (MVS) algorithms aim to generate a rich, dense 3D model of the scene in the form of a dense point cloud or a triangulated mesh. A typical MVS pipeline takes as input the robust camera pose estimates and sparse points obtained from structure from motion (SfM). During this process, the depth of essentially every pixel of the scene is to be calculated. Several methods, either conventional or, more recently, learning-based, have been developed to solve the correspondence search problem. A vast amount of literature exists on local, global and semi-global stereo-matching approaches, with the PatchMatch algorithm being among the most popular and efficient conventional ones of the last decade. Yet, despite the widespread evolution of these algorithms, yielding complete, accurate and aesthetically pleasing 3D representations of a scene remains an open issue in real-world and large-scale photogrammetric applications. This work provides a concrete survey of the most widely used MVS methods, investigating the underlying concepts and challenges. To this end, the theoretical background and relevant literature are discussed for both conventional and learning-based approaches, with a particular focus on close-range 3D reconstruction applications.
Citations: 0
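
The survey highlights PatchMatch-style stereo matching among the conventional approaches. As a small, hedged illustration of the kind of photometric-consistency cost such methods evaluate, the sketch below computes a zero-mean normalised cross-correlation (ZNCC) between two patches; it is a textbook cost, not a method taken from the survey.

```python
# Zero-mean normalised cross-correlation (ZNCC) between two patches, a common
# photometric-consistency cost in PatchMatch-style multi-view stereo.
import numpy as np


def zncc(patch_a: np.ndarray, patch_b: np.ndarray, eps: float = 1e-8) -> float:
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return float((a * b).sum() / denom)   # 1.0 = perfect match, -1.0 = inverted


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p = rng.random((11, 11))
    print(zncc(p, p))              # ~1.0
    print(zncc(p, 2.0 * p + 5.0))  # ~1.0: invariant to gain and offset changes
```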

Automatic calibration of terrestrial laser scanners using intensity features
The Photogrammetric Record Pub Date: 2023-07-18 DOI: 10.1111/phor.12454 Pages: 320–338
Jing Qiao, Tomislav Medic, Andreas Baumann-Ouyang
Abstract: We propose an in situ self-calibration method that detects and matches intensity features on local planes in overlapping point clouds using the Förstner operator. Intensity features from scans at different locations are matched on common local planes rather than on the rasterised grids of horizontal and vertical angles used by the established keypoint-based algorithm. The ability to extract features from different stations enables comprehensive scanner calibration, overcoming the limitation that existing keypoint-based methods can only estimate the two-face-sensitive model parameters. The proposed algorithm was tested with a high-precision panoramic scanner, a Leica RTC360, using datasets from a calibration hall and a general working scenario. The results show that the proposed approach calibrates the two-face-sensitive model parameters consistently with the established keypoint-based method. For the case of comprehensive calibration, with the offset estimated and some angular parameters separated, where the previous keypoint-based method failed, the proposed algorithm achieves an accuracy of 0.16 mm, 2.7″ and 2.1″ in range, azimuth and elevation for the estimated target centres. The proposed algorithm can accurately calibrate two-face-sensitive and more comprehensive model parameters without any on-site preparation, such as mounting targets.
Citations: 0
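
The calibration method relies on intensity features detected with the Förstner operator on local planes. The sketch below shows only the standard Förstner interest measures (precision w and roundness q derived from the local structure tensor); the paper's plane extraction, feature matching and calibration adjustment are not reproduced here.

```python
# Standard Förstner interest measures computed from the local structure tensor
# of an intensity image; candidate points have large w and roundness q near 1.
import numpy as np
from scipy.ndimage import sobel, uniform_filter


def foerstner(intensity: np.ndarray, window: int = 5, eps: float = 1e-12):
    gx = sobel(intensity.astype(np.float64), axis=1)
    gy = sobel(intensity.astype(np.float64), axis=0)
    # Structure tensor entries, averaged over a local window.
    ixx = uniform_filter(gx * gx, size=window)
    iyy = uniform_filter(gy * gy, size=window)
    ixy = uniform_filter(gx * gy, size=window)
    trace = ixx + iyy
    det = ixx * iyy - ixy * ixy
    w = det / (trace + eps)                 # precision (corner strength)
    q = 4.0 * det / (trace * trace + eps)   # roundness in [0, 1]
    return w, q


if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[20:, 30:] = 1.0                      # a synthetic step corner
    w, q = foerstner(img)
    y, x = np.unravel_index(np.argmax(w * (q > 0.5)), w.shape)
    print("strongest candidate near", (y, x))
```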

Real-time mosaic of multiple fisheye surveillance videos based on geo-registration and rectification
The Photogrammetric Record Pub Date: 2023-07-18 DOI: 10.1111/phor.12455 Pages: 339–373
Jiongli Gao, Jun Wu, Mingyi Huang, Gang Xu
Abstract: A distributed fisheye video surveillance system (DFVSS) can monitor a wide area without blind spots, but it is often affected by viewpoint discontinuity and spatial inconsistency across the multiple videos covering the area. This paper proposes a novel real-time fisheye video mosaic algorithm for wide-area surveillance. First, by extending line photogrammetry theory from central projection to spherical projection, a fisheye video geo-registration model is established and estimated using orthogonal parallel lines on the ground, so that all videos of the DFVSS share a unified reference system and the spatial inconsistency between them is eliminated. Second, by combining photogrammetric orthorectification with thin-plate spline transformation, a fisheye video rectification model is established to remove the severe distortion in geo-registered fisheye videos and align them accurately. Third, a viewport-dependent video selection strategy and a video look-up table computation technique are adopted to create a high-resolution panorama from the input fisheye videos in real time. A parking lot of about 0.4 km² monitored by eight fisheye cameras was selected as the test area. The experimental results show that the line re-projection error in the fisheye videos is about 0.5 pixels, and that the overall throughput, including panorama creation and mapping to the ground as texture, is at least 30 fps. This indicates that the proposed algorithm achieves a good balance between limited video transmission bandwidth and the requirement for smooth panorama viewing, which is of great value for the construction and application of DFVSSs.
Citations: 0
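
The mosaicking pipeline precomputes a video look-up table so that each fisheye frame can be warped in real time. The sketch below illustrates that idea under an assumed equidistant fisheye model (r = f·θ): the per-pixel map is computed once and then applied to every frame with cv2.remap. The camera parameters are illustrative, and the paper's geo-registration and thin-plate-spline rectification steps are not modelled.

```python
# Precompute a pixel look-up table that warps an equidistant fisheye frame
# (r = f * theta) into a perspective view, then reuse it for every video frame.
# Camera parameters below are illustrative assumptions, not values from the paper.
import cv2
import numpy as np


def build_lut(out_size, f_persp, f_fish, fish_center):
    w, h = out_size
    cx, cy = w / 2.0, h / 2.0
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Ray direction of each output pixel in the (shared) camera frame.
    x, y, z = u - cx, v - cy, np.full_like(u, f_persp, dtype=np.float64)
    theta = np.arctan2(np.hypot(x, y), z)        # angle from the optical axis
    phi = np.arctan2(y, x)                       # azimuth around the axis
    r = f_fish * theta                           # equidistant fisheye model
    map_x = (fish_center[0] + r * np.cos(phi)).astype(np.float32)
    map_y = (fish_center[1] + r * np.sin(phi)).astype(np.float32)
    return map_x, map_y


if __name__ == "__main__":
    map_x, map_y = build_lut((800, 600), f_persp=500.0, f_fish=350.0,
                             fish_center=(960.0, 960.0))
    frame = np.zeros((1920, 1920, 3), np.uint8)          # stand-in fisheye frame
    view = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
    print(view.shape)
```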

Learning-based encoded target detection on iteratively orthorectified images for accurate fisheye calibration
The Photogrammetric Record Pub Date: 2023-06-19 DOI: 10.1111/phor.12453 Pages: 297–319
Haonan Dong, Jian Yao, Ye Gong, Li Li, Shaosheng Cao, Yuxuan Li
Abstract: Fisheye camera calibration is an essential task in photogrammetry. However, previous calibration patterns and the robustness of the associated processing methods are limited by fisheye distortion and varying illumination, which leads to additional manual intervention during data collection. Moreover, it is difficult to accurately detect the board target under fisheye distortion. To increase robustness in this task, we present a novel encoded board, the "Meta-Board", together with a learning-based target detection method. In addition, an automatic image orthorectification step is integrated to iteratively alleviate the effect of distortion on the target until convergence. A low-cost control field with the proposed boards was built for the experiments. Results on both virtual and real camera lenses and on multi-camera rigs show that our method can be used robustly for fisheye camera calibration and reaches state-of-the-art accuracy.
Citations: 0
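
The paper alternates between target detection and orthorectification of the fisheye image until the detections converge. The encoded "Meta-Board" and the learning-based detector are specific to the paper, so the sketch below only reconstructs the generic alternating loop with OpenCV's fisheye model and a placeholder detector; it is an assumed control flow, not the authors' implementation.

```python
# Sketch of the iterative "detect, orthorectify, re-detect" loop for fisheye
# calibration. detect_targets() is a placeholder for the paper's learning-based
# encoded-target detector; K0/D0 are illustrative initial values.
import cv2
import numpy as np


def detect_targets(image: np.ndarray) -> np.ndarray:
    """Placeholder detector: returns an (N, 2) array of target centres."""
    return np.empty((0, 2), dtype=np.float64)


def iterative_calibration(fisheye_img, K0, D0, max_iters=5, tol=0.05):
    K, D = K0.copy(), D0.copy()
    prev_pts = None
    for _ in range(max_iters):
        # Orthorectify with the current model to reduce distortion on the board.
        rectified = cv2.fisheye.undistortImage(fisheye_img, K, D, Knew=K)
        pts = detect_targets(rectified)
        # In the full pipeline the re-detected targets would drive a
        # calibration/adjustment step here that updates K and D.
        if prev_pts is not None and prev_pts.shape == pts.shape and (
            pts.size == 0 or np.abs(pts - prev_pts).max() < tol
        ):
            break                                # detections stopped moving
        prev_pts = pts
    return K, D


if __name__ == "__main__":
    img = np.zeros((1200, 1200, 3), np.uint8)
    K0 = np.array([[400.0, 0, 600], [0, 400.0, 600], [0, 0, 1]])
    D0 = np.zeros((4, 1))
    K, D = iterative_calibration(img, K0, D0)
    print(K)
```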

Exploring the performance of spectral and textural information for leaf area index estimation with homogeneous and heterogeneous surfaces
The Photogrammetric Record Pub Date: 2023-06-04 DOI: 10.1111/phor.12450 Pages: 233–251
Yangyang Zhang, Xu Han, Jian Yang
Abstract: Leaf area index (LAI) is one of the key parameters of vegetation structure and can be used to monitor vegetation growth status. The abundant spatial information (e.g., textural information) provided by modern remote sensing satellites can boost the accuracy of LAI estimation, so the performance of spectral and textural information for LAI estimation needs to be evaluated across vegetation and surface types. In this study, different spectral vegetation indices (SVIs) and grey-level co-occurrence matrix-based textural variables under different moving window sizes were extracted from Landsat TM satellite data. First, the ability of different types of SVIs to estimate LAI over different surface types was analysed. Subsequently, the effect of different texture variables and moving window sizes on LAI estimation accuracy for different vegetation types was explored. Finally, the performance of SVIs combined with textural information for LAI estimation in different vegetation types was evaluated. The results indicate that SVIs performed better for LAI estimation in the homogeneous region than in the heterogeneous region, and that the difference vegetation index was more effective for LAI estimation across vegetation types than the other SVIs. In addition, variations in texture variables and moving window sizes had a large influence on LAI estimation for natural vegetation with high canopy heterogeneity. SVIs combined with textural information efficiently improved the accuracy of LAI estimation in different vegetation types (R² = 0.672, 0.455 and 0.523 for meadow, shrub and cantaloupe, respectively) compared with SVIs alone (R² = 0.189, 0.064 and 0.431, respectively). For natural vegetation (meadow and shrub) in particular, adding textural information greatly improved the accuracy of LAI estimation.
Citations: 0
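
The study combines spectral vegetation indices with grey-level co-occurrence matrix (GLCM) texture variables for LAI estimation. The sketch below shows one way such predictors might be assembled with scikit-image and scikit-learn: the difference vegetation index (DVI = NIR - Red), a few whole-patch GLCM statistics standing in for the paper's moving-window texture variables, and a linear regression fitted on synthetic data. Band handling, window choice and the regression model are assumptions.

```python
# Sketch: difference vegetation index (DVI = NIR - Red) plus GLCM texture
# statistics as predictors for LAI regression. Per-sample whole-patch texture
# stands in for the paper's moving-window texture variables.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.linear_model import LinearRegression


def patch_features(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    dvi = float(np.mean(nir - red))                       # spectral predictor
    grey = np.clip(nir * 255.0, 0, 255).astype(np.uint8)  # quantise for the GLCM
    glcm = graycomatrix(grey, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    texture = [float(graycoprops(glcm, p)[0, 0])
               for p in ("contrast", "homogeneity", "energy")]
    return np.array([dvi, *texture])


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X, y = [], []
    for lai in np.linspace(0.5, 5.0, 30):                 # synthetic samples
        red = rng.uniform(0.05, 0.15, (32, 32))
        nir = red + 0.08 * lai + rng.normal(0, 0.01, (32, 32))
        X.append(patch_features(nir, red))
        y.append(lai)
    model = LinearRegression().fit(np.array(X), np.array(y))
    print("R^2 on training data:", round(model.score(np.array(X), np.array(y)), 3))
```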

View-graph key-subset extraction for efficient and robust structure from motion
The Photogrammetric Record Pub Date: 2023-06-04 DOI: 10.1111/phor.12451 Pages: 252–296
Ye Gong, Pengwei Zhou, Yu-ye Liu, Haonan Dong, Li Li, Jian Yao
Abstract: Structure from motion (SfM) recovers camera poses and the sparse structure of real scenes from multi-view images. SfM methods construct a view-graph from the matching relationships between images, in which redundancy and incorrect edges are commonly observed. Redundancy reduces efficiency, and incorrect edges lead to misaligned structures. In addition, an uneven distribution of vertices usually affects global accuracy. To address these problems, we propose a coarse-to-fine approach in which the poses of an extracted key subset of images are computed first and all remaining images are then oriented. The core of this approach is view-graph key-subset extraction, which not only prunes redundant data and incorrect edges but also yields well-distributed key-subset vertices. The extraction is based on a replaceability score and an iteration-update strategy, so that only vertices with high importance for SfM are preserved in the key subset. Different public datasets are used to evaluate our approach. Because large-scale datasets lack ground-truth camera poses, we also present new datasets with accurate camera poses and point clouds. The results demonstrate that our approach greatly increases the efficiency of SfM while also improving robustness and accuracy.
Citations: 0
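
The abstract mentions a replaceability score and an iteration-update strategy for extracting the key subset, without defining the score. The sketch below therefore uses a simple stand-in score (how well a vertex's neighbours stay connected without it) and greedily removes the most replaceable vertices while keeping the view-graph connected; it illustrates the iteration-update idea only, not the paper's algorithm.

```python
# Sketch of key-subset extraction from an SfM view-graph: iteratively drop the
# most "replaceable" vertex while keeping the graph connected. The score used
# here is a stand-in for the replaceability score defined in the paper.
import networkx as nx


def replaceability(graph: nx.Graph, v) -> float:
    """High when v's neighbours stay well connected to each other without v."""
    nbrs = list(graph.neighbors(v))
    if len(nbrs) < 2:
        return 0.0
    possible = len(nbrs) * (len(nbrs) - 1) / 2
    return graph.subgraph(nbrs).number_of_edges() / possible


def extract_key_subset(graph: nx.Graph, target_size: int) -> set:
    g = graph.copy()
    while g.number_of_nodes() > target_size:
        removed = False
        # Re-score every iteration and try the most replaceable vertex first.
        for v in sorted(g.nodes, key=lambda n: replaceability(g, n), reverse=True):
            # Only drop v if the remaining view-graph stays connected.
            if nx.is_connected(nx.restricted_view(g, [v], [])):
                g.remove_node(v)
                removed = True
                break
        if not removed:
            break                            # no vertex can be removed safely
    return set(g.nodes)


if __name__ == "__main__":
    view_graph = nx.random_geometric_graph(40, 0.35, seed=2)
    keys = extract_key_subset(view_graph, target_size=15)
    print(len(keys), "key images kept")
```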

2023 Asian Conference on Remote Sensing (ACRS)
The Photogrammetric Record Pub Date: 2023-06-01 DOI: 10.1111/phor.10_12449
The wide variety of sensors and systems available on the market for collecting spatial data makes the evaluation of the information they provide, the calibration of sensors and the benchmarking of systems a critical task. It is also an important scientific issue for many professionals. In daily work, the assessment of algorithms and sensors for collecting and generating spatial data is a crucial issue for academic institutions, research centres, national mapping and cadastral agencies, and all professionals handling geospatial data. The GEOBENCH workshop is therefore aimed at those wishing to extend their knowledge in the fields of photogrammetry and remote sensing and to present evaluations of algorithms and sensors in the sector as well as new benchmarks. The workshop is a follow-up to the first successful event held in Warsaw, Poland, in 2019 and will take place at the AGH University of Science and Technology in Krakow, Poland, on 23–24 October 2023.
Citations: 0

42nd EARSeL Symposium 2023
The Photogrammetric Record Pub Date: 2023-06-01 DOI: 10.1111/phor.4_12449
Citations: 0

Academic Track of Foss4g (Free and Open Source Software for Geospatial) 2023
The Photogrammetric Record Pub Date: 2023-06-01 DOI: 10.1111/phor.3_12449
Citations: 0