Semantic-assisted Unified Network for Feature Point Extraction and Matching
Daoming Ji, W. You, Yisong Chen, Guoping Wang, Sheng Li
DOI: 10.1145/3574131.3574433
Proceedings of the 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry, December 2022
Feature point matching between two images is an essential component of 3D reconstruction, augmented reality, panorama stitching, and related tasks. The quality of the initial feature point matching stage strongly affects the overall performance of such systems. We present a unified feature point extraction and matching method that uses semantic segmentation results to constrain feature point matching. To integrate high-level semantic information into feature points efficiently, we propose a unified feature point extraction and matching network, called SP-Net, which detects feature points, generates feature descriptors, and performs accurate feature point matching simultaneously. Compared with previous work, our method extracts multi-scale context from the image, including both shallow local information and high-level semantic information, and is therefore more stable under complex conditions such as changing illumination or large viewpoint changes. On the feature-matching benchmark, our method outperforms state-of-the-art methods. As further validation, we propose SP-Net++ as an extension for 3D reconstruction. Experimental results show that our network achieves accurate feature point localization and robust feature matching, recovering more camera poses and producing a well-shaped point cloud. Our semantic-assisted method improves the stability of feature points as well as their applicability to complex scenes.
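To make the core idea concrete, here is a minimal sketch of semantic-constrained matching. This is not the authors' SP-Net; it is an illustrative NumPy implementation assuming each keypoint already carries a semantic class id (e.g., sampled from a segmentation map at the keypoint location) and an L2-normalized descriptor. The function name `semantic_match` and the threshold `min_sim` are hypothetical.

```python
# Illustrative sketch (not the authors' SP-Net): semantic-constrained
# mutual nearest-neighbor matching. Descriptors are assumed L2-normalized;
# `labels_*` hold one semantic class id per keypoint, e.g. read from a
# segmentation map at each keypoint's pixel location.
import numpy as np

def semantic_match(desc_a, desc_b, labels_a, labels_b, min_sim=0.7):
    """Match descriptors across two images, keeping only pairs whose
    keypoints share a semantic class and are mutual nearest neighbors.

    desc_a:   (Na, D) float array, descriptors for image A
    desc_b:   (Nb, D) float array, descriptors for image B
    labels_a: (Na,) int array, semantic class ids for image A keypoints
    labels_b: (Nb,) int array, semantic class ids for image B keypoints
    Returns a list of (i, j) index pairs into desc_a / desc_b.
    """
    # Cosine similarity between every descriptor pair.
    sim = desc_a @ desc_b.T                          # (Na, Nb)

    # Semantic constraint: forbid matches across different classes.
    same_class = labels_a[:, None] == labels_b[None, :]
    sim = np.where(same_class, sim, -np.inf)

    # Mutual nearest neighbors under the constrained similarity.
    nn_ab = sim.argmax(axis=1)                       # best match in B for each A
    nn_ba = sim.argmax(axis=0)                       # best match in A for each B
    matches = []
    for i, j in enumerate(nn_ab):
        if sim[i, j] > min_sim and nn_ba[j] == i:
            matches.append((i, j))
    return matches
```

The design rationale is that descriptor appearance can drift under illumination or viewpoint changes, while the semantic class of a surface (building, road, vegetation, etc.) is comparatively stable; masking out cross-class candidates before nearest-neighbor search removes many visually plausible but semantically inconsistent false matches.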