Ruibin Feng, Zongwei Zhou, Michael B Gotway, Jianming Liang
{"title":"Parts2Whole:通过重构进行自监督对比学习","authors":"Ruibin Feng, Zongwei Zhou, Michael B Gotway, Jianming Liang","doi":"10.1007/978-3-030-60548-3_9","DOIUrl":null,"url":null,"abstract":"<p><p>Contrastive representation learning is the state of the art in computer vision, but requires huge mini-batch sizes, special network design, or memory banks, making it unappealing for 3D medical imaging, while in 3D medical imaging, reconstruction-based self-supervised learning reaches a new height in performance, but lacks mechanisms to learn contrastive representation; therefore, this paper proposes a new framework for self-supervised contrastive learning via reconstruction, called Parts2Whole, because it exploits the <i>universal</i> and <i>intrinsic</i> part-whole relationship to learn contrastive representation without using contrastive loss: Reconstructing an image (whole) from its own parts compels the model to learn similar latent features for all its own parts, while reconstructing different images (wholes) from their respective parts forces the model to simultaneously push those parts belonging to different wholes farther apart from each other in the latent space; thereby the trained model is capable of distinguishing images. We have evaluated our Parts2Whole on five distinct imaging tasks covering both classification and segmentation, and compared it with four competing publicly available 3D pretrained models, showing that Parts2Whole significantly outperforms in two out of five tasks while achieves competitive performance on the rest three. This superior performance is attributable to the contrastive representations learned with Parts2Whole. Codes and pretrained models are available at github.com/JLiangLab/Parts2Whole.</p>","PeriodicalId":9441,"journal":{"name":"Cancer research","volume":"73 1","pages":"85-95"},"PeriodicalIF":12.5000,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9197952/pdf/","citationCount":"0","resultStr":"{\"title\":\"Parts2Whole: Self-supervised Contrastive Learning via Reconstruction.\",\"authors\":\"Ruibin Feng, Zongwei Zhou, Michael B Gotway, Jianming Liang\",\"doi\":\"10.1007/978-3-030-60548-3_9\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Contrastive representation learning is the state of the art in computer vision, but requires huge mini-batch sizes, special network design, or memory banks, making it unappealing for 3D medical imaging, while in 3D medical imaging, reconstruction-based self-supervised learning reaches a new height in performance, but lacks mechanisms to learn contrastive representation; therefore, this paper proposes a new framework for self-supervised contrastive learning via reconstruction, called Parts2Whole, because it exploits the <i>universal</i> and <i>intrinsic</i> part-whole relationship to learn contrastive representation without using contrastive loss: Reconstructing an image (whole) from its own parts compels the model to learn similar latent features for all its own parts, while reconstructing different images (wholes) from their respective parts forces the model to simultaneously push those parts belonging to different wholes farther apart from each other in the latent space; thereby the trained model is capable of distinguishing images. 
We have evaluated our Parts2Whole on five distinct imaging tasks covering both classification and segmentation, and compared it with four competing publicly available 3D pretrained models, showing that Parts2Whole significantly outperforms in two out of five tasks while achieves competitive performance on the rest three. This superior performance is attributable to the contrastive representations learned with Parts2Whole. Codes and pretrained models are available at github.com/JLiangLab/Parts2Whole.</p>\",\"PeriodicalId\":9441,\"journal\":{\"name\":\"Cancer research\",\"volume\":\"73 1\",\"pages\":\"85-95\"},\"PeriodicalIF\":12.5000,\"publicationDate\":\"2020-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9197952/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cancer research\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/978-3-030-60548-3_9\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2020/9/26 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"ONCOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cancer research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/978-3-030-60548-3_9","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2020/9/26 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"ONCOLOGY","Score":null,"Total":0}
Parts2Whole: Self-supervised Contrastive Learning via Reconstruction.
Contrastive representation learning is the state of the art in computer vision, but it requires huge mini-batch sizes, special network designs, or memory banks, making it unappealing for 3D medical imaging. In 3D medical imaging, reconstruction-based self-supervised learning has reached new heights in performance, yet it lacks a mechanism for learning contrastive representations. This paper therefore proposes a new framework for self-supervised contrastive learning via reconstruction, called Parts2Whole, which exploits the universal and intrinsic part-whole relationship to learn contrastive representations without a contrastive loss: reconstructing an image (whole) from its own parts compels the model to learn similar latent features for all of those parts, while reconstructing different images (wholes) from their respective parts forces the model to push parts belonging to different wholes farther apart in the latent space; the trained model is thereby capable of distinguishing images. We have evaluated Parts2Whole on five distinct imaging tasks covering both classification and segmentation, and compared it with four competing publicly available 3D pretrained models. Parts2Whole significantly outperforms them on two of the five tasks and achieves competitive performance on the remaining three. This superior performance is attributable to the contrastive representations learned by Parts2Whole. Code and pretrained models are available at github.com/JLiangLab/Parts2Whole.
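The abstract describes pretraining purely by reconstruction: each randomly cropped sub-volume (part) is encoded and then asked to reconstruct the full volume (whole) it came from, so parts of the same whole are pulled together in latent space and parts of different wholes are pushed apart, with no explicit contrastive loss. The sketch below is a minimal, illustrative PyTorch rendering of that idea, not the authors' released implementation; the network architecture, crop sizes, and MSE reconstruction loss are assumptions for clarity (see github.com/JLiangLab/Parts2Whole for the actual code and pretrained models).

```python
# Minimal sketch of the Parts2Whole idea: reconstruct the whole volume
# from a randomly cropped part. Layer sizes and loss are illustrative
# assumptions, not the authors' exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder3D(nn.Module):
    def __init__(self, in_ch=1, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return self.net(x)  # latent code of a part, shape (B, dim)


class Decoder3D(nn.Module):
    """Maps a latent code back to a fixed-size 'whole' volume."""
    def __init__(self, dim=128, out_shape=(32, 32, 32)):
        super().__init__()
        self.out_shape = out_shape
        self.fc = nn.Linear(dim, 64 * 4 * 4 * 4)
        self.net = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 4, 4, 4)
        x = self.net(x)
        return F.interpolate(x, size=self.out_shape,
                             mode="trilinear", align_corners=False)


def random_part(whole, part_size=(16, 16, 16)):
    """Crop a random sub-volume (part) from a whole volume of shape (C, D, H, W)."""
    _, D, H, W = whole.shape
    d, h, w = part_size
    zi = torch.randint(0, D - d + 1, (1,)).item()
    yi = torch.randint(0, H - h + 1, (1,)).item()
    xi = torch.randint(0, W - w + 1, (1,)).item()
    return whole[:, zi:zi + d, yi:yi + h, xi:xi + w]


def train_step(encoder, decoder, wholes, optimizer):
    """wholes: a batch of full volumes, shape (B, 1, 32, 32, 32)."""
    parts = torch.stack([random_part(v) for v in wholes])  # (B, 1, 16, 16, 16)
    z = encoder(parts)          # one latent code per part
    recon = decoder(z)          # predict the whole from the part
    loss = F.mse_loss(recon, wholes)  # reconstruction loss only, no contrastive term
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because every part of a given volume shares the same reconstruction target, their latent codes are implicitly pulled together, while parts drawn from different volumes have different targets and are pushed apart; after pretraining, the encoder can be fine-tuned for downstream classification or segmentation.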