Multi-view Network with Transformer for Point Cloud Semantic Segmentation

Zhongwei Hua, Daming Du
Proceedings of the 2022 6th International Conference on Innovation in Artificial Intelligence
Published 2022-03-04. DOI: 10.1145/3529466.3529504
Most point cloud semantic segmentation networks take a reconstructed, complete point cloud as input, yet in practical application scenarios vision sensors typically capture only single-frame point cloud data. To better meet segmentation requirements in dynamic scenes, this paper proposes an online incremental point cloud semantic segmentation method that feeds both the previously saved point cloud and the currently captured frame into the network, compensating for the limited information in a single frame. A Transformer structure is added to the network to strengthen the fusion of contextual information, and a triplet loss is introduced in the feature space to distinguish different categories of points in a fine-grained manner. Experimental results show that, compared with the baseline MCPNet model, the proposed model improves mIoU by 2.8% and mAcc by 7% on the S3DIS Area 5 dataset, further improving the accuracy of point cloud semantic segmentation.
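The abstract does not give the paper's exact formulation of the triplet loss, but the standard form it refers to can be sketched as follows. This is a minimal illustration on per-point feature vectors, not the authors' implementation; the function name, margin value, and toy features are assumptions for the example.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Standard triplet loss on feature vectors: pull the anchor toward a
    same-class (positive) feature and push it away from a different-class
    (negative) feature by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)  # distance to same-class feature
    d_neg = np.linalg.norm(anchor - negative)  # distance to different-class feature
    return max(d_pos - d_neg + margin, 0.0)

# Toy per-point features: when the negative is already far away,
# the loss is zero; when it is close, the loss is positive.
anchor = np.array([0.0, 0.0])
positive = np.array([0.1, 0.0])
far_negative = np.array([10.0, 0.0])
near_negative = np.array([0.2, 0.0])

print(triplet_loss(anchor, positive, far_negative))   # well separated -> 0.0
print(triplet_loss(anchor, positive, near_negative))  # too close -> positive
```

Applied per point over the fused feature space, such a loss encourages features of the same semantic class to cluster while keeping classes apart, which is the fine-grained discrimination the abstract describes.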