{"title":"An Empirical Comparison of Joint-Training and Pre-Training for Domain-Agnostic Semi-Supervised Learning Via Energy-Based Models","authors":"Yunfu Song, Huahuan Zheng, Zhijian Ou","doi":"10.1109/mlsp52302.2021.9596559","DOIUrl":null,"url":null,"abstract":"Some semi-supervised learning (SSL) methods heavily rely on domain-specific data augmentations. Recently, semi-supervised learning (SSL) via energy-based models (EBMs) has been studied and is attractive from the perspective of being domain-agnostic, since it inherently does not require data augmentations. There exist two different methods for EBM based SSL - joint-training and pre-training. Joint-training estimates the joint distribution of observations and labels, while pre-training is taken over observations only and followed by fine-tuning. Both joint-training and pre-training are previously known in the literature, but it is unclear which one is better when evaluated in a common experimental setup. To the best of our knowledge, this paper is the first to systematically compare joint-training and pre-training for EBM-based for SSL, by conducting a suite of experiments across a variety of domains such as image classification and natural language labeling. It is found that joint-training EBMs outperform pre-training EBMs marginally but nearly consistently, presumably because the optimization of joint-training is directly related to the targeted task, while pre-training does not.","PeriodicalId":156116,"journal":{"name":"2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"93 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/mlsp52302.2021.9596559","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Some semi-supervised learning (SSL) methods rely heavily on domain-specific data augmentations. Recently, SSL via energy-based models (EBMs) has been studied and is attractive from the perspective of being domain-agnostic, since it inherently does not require data augmentations. There exist two different methods for EBM-based SSL: joint-training and pre-training. Joint-training estimates the joint distribution of observations and labels, while pre-training is performed over observations only and is followed by supervised fine-tuning. Both joint-training and pre-training have appeared previously in the literature, but it is unclear which is better when evaluated in a common experimental setup. To the best of our knowledge, this paper is the first to systematically compare joint-training and pre-training for EBM-based SSL, by conducting a suite of experiments across a variety of domains such as image classification and natural language labeling. It is found that joint-training EBMs outperform pre-training EBMs marginally but nearly consistently, presumably because the joint-training objective is directly tied to the targeted task, whereas the pre-training objective is not.
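To make the distinction concrete, below is a minimal sketch (not the authors' code) of the two EBM-based SSL objectives, assuming a JEM-style parameterization in which the class logits also define log p(x) up to a constant, with the intractable negative-phase gradient approximated by short-run SGLD. The class and function names (EnergyClassifier, sgld_negative_samples, joint_training_loss, pretraining_loss) and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EnergyClassifier(nn.Module):
    """Classifier whose logits f(x) also define an energy over inputs (JEM-style)."""
    def __init__(self, in_dim=784, num_classes=10, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, num_classes)
        )

    def forward(self, x):
        return self.net(x)                      # class logits f(x)[y]

    def log_px(self, x):
        # log p(x) = logsumexp_y f(x)[y] - log Z; the gradient of log Z is
        # approximated with negative samples from short-run SGLD below.
        return torch.logsumexp(self.forward(x), dim=-1)

def sgld_negative_samples(model, x_init, steps=20, step_size=1.0, noise=0.01):
    """Short-run SGLD: a crude approximation of samples from the model's p(x)."""
    x = x_init.clone().detach().requires_grad_(True)
    for _ in range(steps):
        grad = torch.autograd.grad(model.log_px(x).sum(), x)[0]
        x = (x + step_size * grad + noise * torch.randn_like(x)) \
            .detach().requires_grad_(True)
    return x.detach()

def joint_training_loss(model, x_lab, y_lab, x_unlab, lam=1.0):
    """Joint-training: supervised log p(y|x) plus generative log p(x) on all data."""
    ce = F.cross_entropy(model(x_lab), y_lab)
    x_all = torch.cat([x_lab, x_unlab], dim=0)
    x_neg = sgld_negative_samples(model, torch.randn_like(x_all))
    # Contrastive positive/negative phase surrogate for -log p(x):
    gen = -(model.log_px(x_all).mean() - model.log_px(x_neg).mean())
    return ce + lam * gen

def pretraining_loss(model, x_all):
    """Pre-training stage: fit log p(x) on observations only; labels are unused."""
    x_neg = sgld_negative_samples(model, torch.randn_like(x_all))
    return -(model.log_px(x_all).mean() - model.log_px(x_neg).mean())

# After pre-training, fine-tuning is plain supervised training on labeled data:
# loss = F.cross_entropy(model(x_lab), y_lab)
```

The sketch highlights the structural difference the abstract points to: the joint-training loss couples the classification term with the generative term in a single objective, while pre-training optimizes only the generative term and defers the task-specific objective to a separate fine-tuning stage.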