PT-ResNet: Perspective Transformation-Based Residual Network for Semantic Road Image Segmentation

Rui Fan, Yuan Wang, Lei Qiao, Ruiwen Yao, Peng Han, Weidong Zhang, I. Pitas, Ming Liu

2019 IEEE International Conference on Imaging Systems and Techniques (IST). Published 29 October 2019. DOI: 10.1109/IST48021.2019.9010501
Semantic road region segmentation is a high-level task that paves the way towards road scene understanding. This paper presents a residual network trained for semantic road segmentation. First, we model the projections of road disparities in the v-disparity map as a linear function, which can be estimated by optimizing the v-disparity map using dynamic programming. This linear model is then used to reduce the redundant information in the left and right road images. The right image is also transformed into the left perspective view, which greatly increases the road-surface similarity between the two images. Finally, the processed stereo images and their disparity maps are concatenated to create a set of 3D images, which are then used to train our neural network. The experimental results show that our network achieves a maximum F1-measure of approximately 91.19% on images from the KITTI road dataset.
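The preprocessing pipeline the abstract outlines (row-wise v-disparity accumulation, a linear road-disparity model, and a disparity-guided warp of the right image into the left view) can be sketched compactly. The following is a minimal Python/NumPy sketch, not the authors' implementation: every function name is hypothetical, and a weighted least-squares fit stands in for the paper's dynamic-programming optimization of the v-disparity map.

```python
import numpy as np

def v_disparity(disp, max_disp=128):
    """Accumulate a histogram of disparity values for each image row v."""
    h = disp.shape[0]
    vdisp = np.zeros((h, max_disp), dtype=np.float32)
    for v in range(h):
        valid = disp[v][(disp[v] > 0) & (disp[v] < max_disp)]
        hist, _ = np.histogram(valid, bins=max_disp, range=(0, max_disp))
        vdisp[v] = hist
    return vdisp

def fit_road_model(vdisp):
    """Fit the linear road projection d = a*v + b in the v-disparity map.
    The paper estimates this model via dynamic programming; a weighted
    least-squares fit is used here purely as a simple stand-in."""
    v, d = np.nonzero(vdisp)
    a, b = np.polyfit(v, d, 1, w=vdisp[v, d])
    return a, b

def right_to_left_view(right_img, a, b):
    """Warp the right image into the left perspective view by shifting
    each row by the modelled road disparity (x_left = x_right + d)."""
    h, w = right_img.shape[:2]
    out = np.zeros_like(right_img)
    for v in range(h):
        d = int(round(a * v + b))
        d = max(0, min(d, w - 1))
        out[v, d:] = right_img[v, :w - d]
    return out

def make_network_input(left_img, right_img, disp):
    """Concatenate the left image, the warped right image, and the
    disparity map channel-wise into one multi-channel training sample."""
    a, b = fit_road_model(v_disparity(disp))
    right_warped = right_to_left_view(right_img, a, b)
    return np.dstack([left_img, right_warped, disp[..., None]])
```

The concatenated output mirrors the multi-channel "3D images" the paper feeds to the network; the exact channel layout and disparity scaling would follow the authors' training setup.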