Self-Supervised Point Cloud Learning in Few-Shot Scenario by Point Up-Sampling and Mutual Information Neural Estimation

Authors: Jiawei Li, Yunan Huang, Yunqi Lei
DOI: 10.1109/icccs55155.2022.9846258
Venue: 2022 7th International Conference on Computer and Communication Systems (ICCCS)
Published: 2022-04-22
Point cloud data is hard to obtain and time-consuming to label. Self-supervised methods can exploit unlabelled data, but they still require large amounts of it. The key to self-supervised learning lies in the design of pretext tasks. In this work, we propose a new self-supervised pretext task for the few-shot learning scenario to further alleviate the data-scarcity problem. Our method trains the network to restore the original point cloud from a down-sampled version of it. Although this point up-sampling pretext task, as a form of reconstruction, ensures that the learned representation contains sufficient information, it cannot guarantee that the representation is discriminative. We therefore introduce a Mutual Information Estimation and Maximization task to increase the discriminability of the learned representation. Classification and segmentation results show that our method learns effective features and improves the performance of downstream models.
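The up-sampling pretext task described above can be sketched in a few lines: down-sample a point cloud, then score how well a (hypothetical) up-sampling network's output matches the original, using the symmetric Chamfer distance as the reconstruction loss. This is a minimal NumPy illustration, not the authors' implementation; the random down-sampling, the 4:1 ratio, and the helper names are assumptions for the sketch (the paper's actual sampling scheme and network are not specified in the abstract).

```python
import numpy as np

def downsample(points, ratio=4, rng=None):
    # Randomly keep 1/ratio of the points — a simple stand-in for whatever
    # down-sampling the paper uses (e.g. farthest-point sampling).
    rng = rng or np.random.default_rng(0)
    n = points.shape[0]
    idx = rng.choice(n, n // ratio, replace=False)
    return points[idx]

def chamfer_distance(a, b):
    # Symmetric Chamfer distance between point sets a:(N,3) and b:(M,3):
    # mean nearest-neighbour distance in both directions.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

rng = np.random.default_rng(0)
cloud = rng.standard_normal((1024, 3))   # the "original" point cloud
sparse = downsample(cloud, ratio=4, rng=rng)

# An up-sampling network would map `sparse` back toward `cloud`; the
# pretext loss is the Chamfer distance between its prediction and the
# original. Feeding `sparse` itself gives an upper bound on that loss.
loss_upper_bound = chamfer_distance(sparse, cloud)
```

Minimizing this reconstruction loss forces the learned representation to retain the geometry of the full cloud, which is the information-sufficiency argument the abstract makes; the mutual-information maximization term is then added separately to make the representation discriminative.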