{"title":"基于编码器-解码器的神经网络视角估计","authors":"Yutong Wang, Qi Zhang, Joongkyu Kim, Huifang Li","doi":"10.1145/3469951.3469967","DOIUrl":null,"url":null,"abstract":"In the images processing field, we tend to use auxiliary information to assist the network for deep analysis, and perspective value is one of the auxiliary information that we frequently use. It can effectively solve the issue of perspective distortion. But most datasets cannot provide the perspective value of the image, so we devote to building a network, named perspective estimation network (PENet), that can extract the perspective value from the input image. In this paper, we propose an innovative training method that can accurately predict the perspective value. We trained the PENet on the WorldExpo’10 dataset and the test results show that our method is highly effective.","PeriodicalId":313453,"journal":{"name":"Proceedings of the 2021 3rd International Conference on Image Processing and Machine Vision","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Encoder-Decoder based Neural Network for Perspective Estimation\",\"authors\":\"Yutong Wang, Qi Zhang, Joongkyu Kim, Huifang Li\",\"doi\":\"10.1145/3469951.3469967\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the images processing field, we tend to use auxiliary information to assist the network for deep analysis, and perspective value is one of the auxiliary information that we frequently use. It can effectively solve the issue of perspective distortion. But most datasets cannot provide the perspective value of the image, so we devote to building a network, named perspective estimation network (PENet), that can extract the perspective value from the input image. In this paper, we propose an innovative training method that can accurately predict the perspective value. We trained the PENet on the WorldExpo’10 dataset and the test results show that our method is highly effective.\",\"PeriodicalId\":313453,\"journal\":{\"name\":\"Proceedings of the 2021 3rd International Conference on Image Processing and Machine Vision\",\"volume\":\"43 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-05-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2021 3rd International Conference on Image Processing and Machine Vision\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3469951.3469967\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2021 3rd International Conference on Image Processing and Machine Vision","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3469951.3469967","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Encoder-Decoder based Neural Network for Perspective Estimation
In the image processing field, auxiliary information is often used to help a network analyze images more deeply, and the perspective value is one of the kinds we use most frequently: it can effectively mitigate perspective distortion. However, most datasets do not provide the perspective value of an image, so we devote ourselves to building a network, named the perspective estimation network (PENet), that extracts the perspective value directly from the input image. In this paper, we propose an innovative training method that accurately predicts the perspective value. We trained PENet on the WorldExpo'10 dataset, and the test results show that our method is highly effective.
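The abstract does not specify PENet's exact layer configuration, so the following is only a minimal sketch of the general idea: an encoder-decoder network that regresses a per-pixel perspective map from an input image. All layer sizes, the `PENet` class name as written here, and the MSE training loss are illustrative assumptions, not the authors' published architecture.

```python
# Minimal encoder-decoder sketch for perspective estimation (illustrative only;
# layer sizes and loss are assumptions, not the paper's PENet specification).
import torch
import torch.nn as nn
import torch.nn.functional as F


class PENet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: three stride-2 convolutions downsample the image 8x
        # while widening the channel dimension.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: three transposed convolutions restore the input
        # resolution and regress one perspective value per pixel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    model = PENet()
    image = torch.randn(1, 3, 256, 256)        # dummy input image
    perspective_map = model(image)             # shape: (1, 1, 256, 256)
    target = torch.randn(1, 1, 256, 256)       # dummy ground-truth perspective map
    loss = F.mse_loss(perspective_map, target) # placeholder regression loss
    loss.backward()
    print(perspective_map.shape, loss.item())
```

Regressing a dense map rather than a single scalar matches how perspective values are typically consumed in crowd-analysis pipelines on WorldExpo'10, where each pixel's value reflects the local scene scale; a scalar-output variant would simply pool the encoder features instead.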