Min Chen, T. Fang, Qing Zhu, X. Ge, Zhanhao Zhang, Xin Zhang
Photogrammetric Engineering and Remote Sensing, published 2021-10-01
DOI: 10.14358/pers.21-00022r2
Feature-Point Matching for Aerial and Ground Images by Exploiting Line Segment-Based Local-Region Constraints
In this study, we propose a feature-point matching method that is robust to viewpoint, scale, and illumination changes between aerial and ground images, to improve matching performance. First, a 3D rendering strategy is adopted to synthesize ground-view images from the 3D mesh model reconstructed from aerial images, overcoming the global geometric distortion between aerial and ground images. Rather than directly matching feature points between the synthesized and ground images, we extract line-segment correspondences by designing a line-segment matching method that can adapt to the local geometric deformation, holes, and blurred textures of the synthesized image. Then, on the basis of the line-segment matches, local-region correspondences are constructed, and local regions on the synthesized image are propagated back to the original aerial images. Lastly, feature-point matching is performed between the aerial and ground images under the constraints of the local-region correspondences. Experimental results demonstrate that the proposed method obtains more correct matches and higher matching precision than state-of-the-art methods. Specifically, compared with the second-best method, it yields more than five times the average number of correct matches and improves average matching precision by more than 40%.
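The final step of the pipeline, matching feature points only within corresponding local regions, can be illustrated with a minimal sketch. This is not the authors' implementation: the region construction (an expanded bounding box around each matched line segment), the toy descriptors, and the distance threshold are all simplifying assumptions made purely for illustration.

```python
# Hypothetical sketch of region-constrained feature matching: candidate
# point pairs are considered only if both points fall inside a pair of
# corresponding local regions derived from matched line segments.
from math import dist


def segment_region(seg, margin=10.0):
    """Axis-aligned bounding box around a line segment, expanded by margin."""
    (x1, y1), (x2, y2) = seg
    return (min(x1, x2) - margin, min(y1, y2) - margin,
            max(x1, x2) + margin, max(y1, y2) + margin)


def in_region(pt, box):
    x0, y0, x1, y1 = box
    return x0 <= pt[0] <= x1 and y0 <= pt[1] <= y1


def region_constrained_match(kps_a, kps_b, segment_matches, max_desc_dist=0.5):
    """Match (point, descriptor) pairs restricted to corresponding regions.

    kps_a, kps_b: lists of (point, descriptor) tuples.
    segment_matches: list of (segment_a, segment_b) line-segment pairs.
    """
    matches = []
    for seg_a, seg_b in segment_matches:
        box_a, box_b = segment_region(seg_a), segment_region(seg_b)
        cand_a = [k for k in kps_a if in_region(k[0], box_a)]
        cand_b = [k for k in kps_b if in_region(k[0], box_b)]
        for pa, da in cand_a:
            # Nearest-descriptor search limited to the corresponding region.
            best = min(cand_b, key=lambda kb: dist(da, kb[1]), default=None)
            if best is not None and dist(da, best[1]) < max_desc_dist:
                matches.append((pa, best[0]))
    return matches


aerial = [((5, 5), (0.1, 0.2)), ((100, 100), (0.9, 0.9))]
ground = [((6, 4), (0.12, 0.21)), ((200, 50), (0.5, 0.5))]
segs = [(((0, 0), (10, 10)), ((0, 0), (12, 9)))]
print(region_constrained_match(aerial, ground, segs))
# → [((5, 5), (6, 4))]
```

Restricting the search space this way is what makes the method robust to the large viewpoint and scale differences between aerial and ground views: descriptor ambiguity is resolved locally instead of over the whole image.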
Journal description:
Photogrammetric Engineering & Remote Sensing, commonly referred to as PE&RS, is the official journal of imaging and geospatial information science and technology. Each issue regularly includes peer-reviewed technical papers alongside highlight articles such as the popular columns "Grids & Datums" and "Mapping Matters."
We publish thousands of documents, reports, codes, and informational articles in and about the industries relating to geospatial sciences, remote sensing, photogrammetry, and other imaging sciences.