{"title":"医学图像自监督预训练的表征恢复","authors":"Xiangyi Yan, Junayed Naushad, Shanlin Sun, Kun Han, Hao Tang, Deying Kong, Haoyu Ma, Chenyu You, Xiaohui Xie","doi":"10.1109/WACV56688.2023.00271","DOIUrl":null,"url":null,"abstract":"Advances in self-supervised learning have drawn attention to developing techniques to extract effective visual representations from unlabeled images. Contrastive learning (CL) trains a model to extract consistent features by generating different views. Recent success of Masked Autoencoders (MAE) highlights the benefit of generative modeling in self-supervised learning. The generative approaches encode the input into a compact embedding and empower the model’s ability of recovering the original input. However, in our experiments, we found vanilla MAE mainly recovers coarse high level semantic information and is inadequate in recovering detailed low level information. We show that in dense downstream prediction tasks like multi-organ segmentation, directly applying MAE is not ideal. Here, we propose RepRec, a hybrid visual representation learning framework for self-supervised pre-training on large-scale unlabelled medical datasets, which takes advantage of both contrastive and generative modeling. To solve the aforementioned dilemma that MAE encounters, a convolutional encoder is pre-trained to provide low-level feature information, in a contrastive way; and a transformer encoder is pre-trained to produce high level semantic dependency, in a generative way – by recovering masked representations from the convolutional encoder. Extensive experiments on three multi-organ segmentation datasets demonstrate that our method outperforms current state-of-the-art methods.","PeriodicalId":270631,"journal":{"name":"2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Representation Recovering for Self-Supervised Pre-training on Medical Images\",\"authors\":\"Xiangyi Yan, Junayed Naushad, Shanlin Sun, Kun Han, Hao Tang, Deying Kong, Haoyu Ma, Chenyu You, Xiaohui Xie\",\"doi\":\"10.1109/WACV56688.2023.00271\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Advances in self-supervised learning have drawn attention to developing techniques to extract effective visual representations from unlabeled images. Contrastive learning (CL) trains a model to extract consistent features by generating different views. Recent success of Masked Autoencoders (MAE) highlights the benefit of generative modeling in self-supervised learning. The generative approaches encode the input into a compact embedding and empower the model’s ability of recovering the original input. However, in our experiments, we found vanilla MAE mainly recovers coarse high level semantic information and is inadequate in recovering detailed low level information. We show that in dense downstream prediction tasks like multi-organ segmentation, directly applying MAE is not ideal. Here, we propose RepRec, a hybrid visual representation learning framework for self-supervised pre-training on large-scale unlabelled medical datasets, which takes advantage of both contrastive and generative modeling. 
To solve the aforementioned dilemma that MAE encounters, a convolutional encoder is pre-trained to provide low-level feature information, in a contrastive way; and a transformer encoder is pre-trained to produce high level semantic dependency, in a generative way – by recovering masked representations from the convolutional encoder. Extensive experiments on three multi-organ segmentation datasets demonstrate that our method outperforms current state-of-the-art methods.\",\"PeriodicalId\":270631,\"journal\":{\"name\":\"2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)\",\"volume\":\"37 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/WACV56688.2023.00271\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WACV56688.2023.00271","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Representation Recovering for Self-Supervised Pre-training on Medical Images
Advances in self-supervised learning have drawn attention to techniques for extracting effective visual representations from unlabeled images. Contrastive learning (CL) trains a model to extract consistent features across different generated views. The recent success of Masked Autoencoders (MAE) highlights the benefit of generative modeling in self-supervised learning: generative approaches encode the input into a compact embedding and strengthen the model's ability to recover the original input. However, in our experiments we found that vanilla MAE mainly recovers coarse high-level semantic information and is inadequate at recovering detailed low-level information. We show that for dense downstream prediction tasks such as multi-organ segmentation, directly applying MAE is not ideal. Here, we propose RepRec, a hybrid visual representation learning framework for self-supervised pre-training on large-scale unlabeled medical datasets that takes advantage of both contrastive and generative modeling. To resolve the dilemma that MAE encounters, a convolutional encoder is pre-trained contrastively to provide low-level feature information, and a transformer encoder is pre-trained generatively, by recovering masked representations produced by the convolutional encoder, to capture high-level semantic dependencies. Extensive experiments on three multi-organ segmentation datasets demonstrate that our method outperforms current state-of-the-art methods.
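The central objective sketched in the abstract, pre-training a transformer by recovering masked representations produced by a convolutional encoder, can be illustrated with a minimal PyTorch-style sketch. This is an assumption-laden illustration, not the paper's implementation: the module names (ConvFeatureEncoder, MaskedRepresentationRecovery), feature dimensions, masking ratio, and the mean-squared-error reconstruction objective are all hypothetical choices made here for clarity.

```python
# Minimal sketch of masked representation recovery (names and hyperparameters
# are illustrative assumptions, not the authors' released implementation).
import torch
import torch.nn as nn


class ConvFeatureEncoder(nn.Module):
    """Stand-in convolutional encoder: maps an image to a grid of patch features."""

    def __init__(self, in_ch=1, dim=256, patch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, dim // 2, kernel_size=patch // 2, stride=patch // 2),
            nn.GELU(),
            nn.Conv2d(dim // 2, dim, kernel_size=2, stride=2),
        )

    def forward(self, x):                        # x: (B, C, H, W)
        f = self.net(x)                          # (B, dim, H/patch, W/patch)
        return f.flatten(2).transpose(1, 2)      # (B, N, dim) token sequence


class MaskedRepresentationRecovery(nn.Module):
    """Transformer trained to recover convolutional features at masked positions."""

    def __init__(self, dim=256, depth=4, heads=8, mask_ratio=0.6):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(dim, dim)          # predicts the masked conv feature
        self.mask_ratio = mask_ratio

    def forward(self, conv_feats):               # conv_feats: (B, N, dim)
        # Detach so this loss updates only the transformer; the conv encoder is
        # assumed to be pre-trained separately with a contrastive objective.
        feats = conv_feats.detach()
        B, N, D = feats.shape
        num_masked = int(self.mask_ratio * N)

        # Randomly select which token positions to mask in each sample.
        noise = torch.rand(B, N, device=feats.device)
        mask_idx = noise.argsort(dim=1)[:, :num_masked]       # (B, num_masked)
        mask = torch.zeros(B, N, dtype=torch.bool, device=feats.device)
        mask[torch.arange(B).unsqueeze(1), mask_idx] = True

        # Replace masked positions with a learned mask token; keep the rest.
        tokens = torch.where(
            mask.unsqueeze(-1), self.mask_token.expand(B, N, D), feats
        )
        pred = self.head(self.transformer(tokens))            # (B, N, dim)

        # Reconstruction loss only on the masked positions.
        loss = ((pred - feats) ** 2)[mask].mean()
        return loss


# Usage on a dummy single-channel 224x224 batch.
conv_encoder = ConvFeatureEncoder()
recovery = MaskedRepresentationRecovery()
images = torch.randn(2, 1, 224, 224)
loss = recovery(conv_encoder(images))
loss.backward()
```

In this sketch the convolutional features are detached, so the recovery loss trains only the transformer; the convolutional encoder itself would be pre-trained with a separate contrastive objective, as the abstract describes.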