{"title":"弱监督点云语义分割的跨云一致性","authors":"Yachao Zhang;Yuxiang Lan;Yuan Xie;Cuihua Li;Yanyun Qu","doi":"10.1109/TNNLS.2025.3526164","DOIUrl":null,"url":null,"abstract":"Weakly supervised point cloud semantic segmentation is an increasingly active topic, because fully supervised learning acquires well-labeled point clouds and entails high costs. The existing weakly supervised methods either need meticulously designed data augmentation for self-supervised learning or ignore the negative effects of learning on pseudolabel noises. In this article, by designing different granularity of cross-cloud structures, we propose a cross-cloud consistency method for weakly supervised point cloud semantic segmentation which forms the expectation-maximum (EM) framework. Benefiting from the cross-cloud constraints, our method allows effective learning alternatively between refining pseudolabels and updating network parameters. Specifically, in E-step, we propose a pseudolabel selecting (PLS) strategy based on cross subcloud consistency, improving the credibility of selected pseudolabels explicitly. In M-step, a cross-scene contrastive regularization enforces cross-scene prototypes with the same label in different scenes to be more similar, while keeping prototypes with different labels to be a clear margin, reducing the noise fitting. Finally, we give some insight into the optimization of our method in the EM theoretical way. The proposed method is evaluated on three challenging datasets, where experimental results demonstrate that our method significantly outperforms state-of-the-art weakly supervised competitors. Our code is available online: <uri>https://github.com/Yachao-Zhang/Cross-Cloud-Consistency</uri>.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"36 8","pages":"14452-14463"},"PeriodicalIF":8.9000,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cross-Cloud Consistency for Weakly Supervised Point Cloud Semantic Segmentation\",\"authors\":\"Yachao Zhang;Yuxiang Lan;Yuan Xie;Cuihua Li;Yanyun Qu\",\"doi\":\"10.1109/TNNLS.2025.3526164\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Weakly supervised point cloud semantic segmentation is an increasingly active topic, because fully supervised learning acquires well-labeled point clouds and entails high costs. The existing weakly supervised methods either need meticulously designed data augmentation for self-supervised learning or ignore the negative effects of learning on pseudolabel noises. In this article, by designing different granularity of cross-cloud structures, we propose a cross-cloud consistency method for weakly supervised point cloud semantic segmentation which forms the expectation-maximum (EM) framework. Benefiting from the cross-cloud constraints, our method allows effective learning alternatively between refining pseudolabels and updating network parameters. Specifically, in E-step, we propose a pseudolabel selecting (PLS) strategy based on cross subcloud consistency, improving the credibility of selected pseudolabels explicitly. In M-step, a cross-scene contrastive regularization enforces cross-scene prototypes with the same label in different scenes to be more similar, while keeping prototypes with different labels to be a clear margin, reducing the noise fitting. Finally, we give some insight into the optimization of our method in the EM theoretical way. 
The proposed method is evaluated on three challenging datasets, where experimental results demonstrate that our method significantly outperforms state-of-the-art weakly supervised competitors. Our code is available online: <uri>https://github.com/Yachao-Zhang/Cross-Cloud-Consistency</uri>.\",\"PeriodicalId\":13303,\"journal\":{\"name\":\"IEEE transactions on neural networks and learning systems\",\"volume\":\"36 8\",\"pages\":\"14452-14463\"},\"PeriodicalIF\":8.9000,\"publicationDate\":\"2025-01-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on neural networks and learning systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10843141/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10843141/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Cross-Cloud Consistency for Weakly Supervised Point Cloud Semantic Segmentation
Abstract: Weakly supervised point cloud semantic segmentation is an increasingly active topic, because fully supervised learning requires well-labeled point clouds and entails high annotation costs. Existing weakly supervised methods either need meticulously designed data augmentation for self-supervised learning or ignore the negative effects of pseudolabel noise on learning. In this article, by designing cross-cloud structures at different granularities, we propose a cross-cloud consistency method for weakly supervised point cloud semantic segmentation that forms an expectation-maximization (EM) framework. Benefiting from the cross-cloud constraints, our method alternates effectively between refining pseudolabels and updating network parameters. Specifically, in the E-step, we propose a pseudolabel selection (PLS) strategy based on cross-subcloud consistency, explicitly improving the credibility of the selected pseudolabels. In the M-step, a cross-scene contrastive regularization enforces prototypes with the same label in different scenes to be more similar, while keeping a clear margin between prototypes with different labels, which reduces noise fitting. Finally, we give some insight into the optimization of our method from the EM theoretical perspective. The proposed method is evaluated on three challenging datasets, where experimental results demonstrate that it significantly outperforms state-of-the-art weakly supervised competitors. Our code is available online: https://github.com/Yachao-Zhang/Cross-Cloud-Consistency.
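The abstract only outlines the EM-style alternation at a high level. The following minimal PyTorch sketch illustrates the general idea, not the authors' implementation (see their repository for that): the E-step keeps a pseudolabel only where predictions on two overlapping sub-cloud views agree with high confidence, and the M-step updates the network with a segmentation loss on the kept pseudolabels plus an InfoNCE-style prototype contrastive term. All names (SegNet, select_pseudolabels, prototype_contrastive), the toy backbone, and the jittered views standing in for sub-clouds are illustrative assumptions.

```python
# Illustrative sketch only, under the assumptions stated above.
import torch
import torch.nn.functional as F


class SegNet(torch.nn.Module):
    """Toy per-point classifier standing in for a point cloud backbone."""

    def __init__(self, in_dim=3, feat_dim=32, num_classes=13):
        super().__init__()
        self.encoder = torch.nn.Sequential(
            torch.nn.Linear(in_dim, feat_dim), torch.nn.ReLU(),
            torch.nn.Linear(feat_dim, feat_dim))
        self.head = torch.nn.Linear(feat_dim, num_classes)

    def forward(self, pts):                  # pts: (N, 3)
        feat = self.encoder(pts)             # (N, feat_dim)
        return feat, self.head(feat)         # per-point features and logits


def select_pseudolabels(logits_a, logits_b, thresh=0.9):
    """E-step sketch: keep a point's pseudolabel only when predictions on two
    sub-cloud views agree and the averaged confidence is high."""
    prob_a, prob_b = logits_a.softmax(-1), logits_b.softmax(-1)
    conf, label = ((prob_a + prob_b) / 2).max(-1)
    agree = prob_a.argmax(-1) == prob_b.argmax(-1)
    return label, agree & (conf > thresh)


def prototype_contrastive(feat, labels, mask, temperature=0.1):
    """M-step sketch: pull same-class prototypes (mean features) together and
    push different-class prototypes apart with an InfoNCE-style loss."""
    protos = [feat[mask & (labels == c)].mean(0) for c in labels[mask].unique()]
    if len(protos) < 2:
        return feat.sum() * 0.0              # nothing to contrast yet
    protos = F.normalize(torch.stack(protos), dim=-1)           # (C, D)
    sim = protos @ protos.t() / temperature
    # Each prototype should be most similar to itself among all prototypes.
    return F.cross_entropy(sim, torch.arange(len(protos)))


net = SegNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
pts = torch.randn(1024, 3)                   # one unlabeled scene
view_a = pts + 0.01 * torch.randn_like(pts)  # two overlapping sub-cloud views
view_b = pts + 0.01 * torch.randn_like(pts)

with torch.no_grad():                        # E-step: refine pseudolabels
    _, logits_a = net(view_a)
    _, logits_b = net(view_b)
    pseudo, keep = select_pseudolabels(logits_a, logits_b)

feat, logits = net(pts)                      # M-step: update network parameters
seg_loss = (F.cross_entropy(logits[keep], pseudo[keep])
            if keep.any() else logits.sum() * 0.0)
loss = seg_loss + 0.1 * prototype_contrastive(feat, pseudo, keep)
opt.zero_grad()
loss.backward()
opt.step()
```

The alternation mirrors EM as described in the abstract: the E-step treats the current network as fixed while refining which pseudolabels to trust, and the M-step treats the refined pseudolabels as fixed while updating the network parameters.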
Journal introduction:
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.