Bai Zhu, Yuanxin Ye, Jinkun Dai, Tao Peng, Jiwei Deng, Qing Zhu
{"title":"VDFT:利用视点不变的可变形特征变换对航空和地面图像进行稳健的特征匹配","authors":"Bai Zhu , Yuanxin Ye , Jinkun Dai , Tao Peng , Jiwei Deng , Qing Zhu","doi":"10.1016/j.isprsjprs.2024.09.016","DOIUrl":null,"url":null,"abstract":"<div><p>Establishing accurate correspondences between aerial and ground images is facing immense challenges because of the drastic viewpoint, illumination, and scale variations resulting from significant differences in viewing angles, shoot timing, and imaging mechanisms. To cope with these issues, we propose an effective aerial-to-ground feature matching method, named Viewpoint-invariant Deformable Feature Transformation (VDFT), which aims to comprehensively enhance the discrimination of local features by utilizing deformable convolutional network (DCN) and seed attention mechanism. Specifically, the proposed VDFT is constructed consisting of three pivotal modules: (1) a learnable deformable feature network is established by using DCN and Depthwise Separable Convolution (DSC) to obtain dynamic receptive fields, addressing local geometric deformations caused by viewpoint variation; (2) an improved joint detection and description strategy is presented through concurrently sharing the multi-level deformable feature representation to enhance the localization accuracy and representation capabilities of feature points; and (3) a seed attention matching module is built by introducing self- and cross- seed attention mechanisms to improve the performance and efficiency for aerial-to-ground feature matching. Finally, we conduct thorough experiments to verify the matching performance of our VDFT on five challenging aerial-to-ground datasets. Extensive experimental evaluations prove that our VDFT is more resistant to perspective distortion and drastic variations in viewpoint, illumination, and scale. It exhibits satisfactory matching performance and outperforms the current state-of-the-art (SOTA) methods in terms of robustness and accuracy.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"218 ","pages":"Pages 311-325"},"PeriodicalIF":10.6000,"publicationDate":"2024-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"VDFT: Robust feature matching of aerial and ground images using viewpoint-invariant deformable feature transformation\",\"authors\":\"Bai Zhu , Yuanxin Ye , Jinkun Dai , Tao Peng , Jiwei Deng , Qing Zhu\",\"doi\":\"10.1016/j.isprsjprs.2024.09.016\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Establishing accurate correspondences between aerial and ground images is facing immense challenges because of the drastic viewpoint, illumination, and scale variations resulting from significant differences in viewing angles, shoot timing, and imaging mechanisms. To cope with these issues, we propose an effective aerial-to-ground feature matching method, named Viewpoint-invariant Deformable Feature Transformation (VDFT), which aims to comprehensively enhance the discrimination of local features by utilizing deformable convolutional network (DCN) and seed attention mechanism. 
Specifically, the proposed VDFT is constructed consisting of three pivotal modules: (1) a learnable deformable feature network is established by using DCN and Depthwise Separable Convolution (DSC) to obtain dynamic receptive fields, addressing local geometric deformations caused by viewpoint variation; (2) an improved joint detection and description strategy is presented through concurrently sharing the multi-level deformable feature representation to enhance the localization accuracy and representation capabilities of feature points; and (3) a seed attention matching module is built by introducing self- and cross- seed attention mechanisms to improve the performance and efficiency for aerial-to-ground feature matching. Finally, we conduct thorough experiments to verify the matching performance of our VDFT on five challenging aerial-to-ground datasets. Extensive experimental evaluations prove that our VDFT is more resistant to perspective distortion and drastic variations in viewpoint, illumination, and scale. It exhibits satisfactory matching performance and outperforms the current state-of-the-art (SOTA) methods in terms of robustness and accuracy.</p></div>\",\"PeriodicalId\":50269,\"journal\":{\"name\":\"ISPRS Journal of Photogrammetry and Remote Sensing\",\"volume\":\"218 \",\"pages\":\"Pages 311-325\"},\"PeriodicalIF\":10.6000,\"publicationDate\":\"2024-09-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ISPRS Journal of Photogrammetry and Remote Sensing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S092427162400354X\",\"RegionNum\":1,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"GEOGRAPHY, PHYSICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ISPRS Journal of Photogrammetry and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S092427162400354X","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"GEOGRAPHY, PHYSICAL","Score":null,"Total":0}
VDFT: Robust feature matching of aerial and ground images using viewpoint-invariant deformable feature transformation
Establishing accurate correspondences between aerial and ground images faces immense challenges because of the drastic viewpoint, illumination, and scale variations resulting from significant differences in viewing angles, shooting times, and imaging mechanisms. To cope with these issues, we propose an effective aerial-to-ground feature matching method, named Viewpoint-invariant Deformable Feature Transformation (VDFT), which aims to comprehensively enhance the discrimination of local features by utilizing a deformable convolutional network (DCN) and a seed attention mechanism. Specifically, the proposed VDFT consists of three pivotal modules: (1) a learnable deformable feature network is established by using DCN and Depthwise Separable Convolution (DSC) to obtain dynamic receptive fields, addressing local geometric deformations caused by viewpoint variation; (2) an improved joint detection and description strategy is presented, in which the multi-level deformable feature representation is shared to enhance the localization accuracy and representation capabilities of feature points; and (3) a seed attention matching module is built by introducing self- and cross-seed attention mechanisms to improve the performance and efficiency of aerial-to-ground feature matching. Finally, we conduct thorough experiments to verify the matching performance of our VDFT on five challenging aerial-to-ground datasets. Extensive experimental evaluations demonstrate that our VDFT is more resistant to perspective distortion and drastic variations in viewpoint, illumination, and scale. It exhibits satisfactory matching performance and outperforms current state-of-the-art (SOTA) methods in terms of robustness and accuracy.
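To make module (1) concrete, the following is a minimal sketch, not the authors' released code, of a feature block that pairs a deformable convolution (giving each output location a dynamic receptive field via learned sampling offsets) with a depthwise separable convolution. The class name, layer sizes, and offset-prediction design are illustrative assumptions; only the general DCN + DSC pattern is taken from the abstract.

```python
# Illustrative sketch of a DCN + DSC feature block; all names and
# hyperparameters here are assumptions, not the paper's implementation.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableFeatureBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        pad = k // 2
        # Predict 2 (x, y) offsets per kernel tap from the input itself,
        # so the sampling grid deforms with local image geometry.
        self.offset_pred = nn.Conv2d(in_ch, 2 * k * k, k, padding=pad)
        self.deform = DeformConv2d(in_ch, out_ch, k, padding=pad)
        # Depthwise separable convolution: per-channel spatial filtering
        # followed by a 1x1 pointwise convolution that mixes channels.
        self.depthwise = nn.Conv2d(out_ch, out_ch, k, padding=pad, groups=out_ch)
        self.pointwise = nn.Conv2d(out_ch, out_ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_pred(x)
        x = self.act(self.deform(x, offsets))
        return self.act(self.pointwise(self.depthwise(x)))


# Usage: produce a dense feature map from which keypoints and descriptors
# could then be jointly detected and described (module (2) in the abstract).
feat = DeformableFeatureBlock(3, 64)(torch.randn(1, 3, 256, 256))
print(feat.shape)  # torch.Size([1, 64, 256, 256])
```

Stacking such blocks at multiple resolutions would yield the kind of multi-level deformable feature representation the abstract describes; how VDFT actually shares those levels between detection and description is specified in the paper itself, not here.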
About the journal:
The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) is the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It provides a platform for scientists and professionals worldwide working in disciplines that use photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal facilitates the communication and dissemination of advances in these disciplines and also serves as a comprehensive source of reference and archive.
P&RS endeavors to publish high-quality, peer-reviewed research papers that are preferably original and have not been published before. These papers can cover scientific/research, technological development, or application/practical aspects. Additionally, the journal welcomes papers that are based on presentations from ISPRS meetings, as long as they are considered significant contributions to the aforementioned fields.
In particular, P&RS encourages the submission of papers that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new directions in scientific or professional realms. It is preferred that theoretical papers include practical applications, while papers focusing on systems and applications should include a theoretical background.