Title: Prototype and Metric Based Prediction for Data-Efficient Training
Author: Gaowei Zhou
Published in: 2022 5th Asia Conference on Machine Learning and Computing (ACMLC), December 2022
DOI: 10.1109/acmlc58173.2022.00019
Citations: 0
Abstract
We propose a prototype- and metric-based prediction method, together with several training pipelines, for training a network without any additional data on few-shot learning tasks with differing intra-class variance. Evaluated on two datasets commonly used for few-shot learning, our method improves data efficiency and prevents overfitting. On the dataset with low intra-class variance it is even competitive with a meta-learning-based method trained on many extra labeled samples, and on the dataset with high intra-class variance it shows no significant performance gap. We report 99.0% accuracy on Omniglot and 48.0% accuracy on mini-ImageNet for 5-way 5-shot tasks.
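The abstract does not spell out the prediction rule, but prototype- and metric-based prediction conventionally means: embed the support samples, average the embeddings of each class into a prototype, and assign a query to the class of the nearest prototype under a chosen metric. A minimal sketch of that rule, assuming Euclidean distance and NumPy arrays of embeddings (the function names and shapes here are illustrative, not from the paper):

```python
import numpy as np

def class_prototypes(support_emb, support_labels, n_classes):
    """Prototype of each class = mean embedding of its support samples.

    support_emb: (n_support, dim) array of embedded support samples.
    support_labels: (n_support,) integer class labels in [0, n_classes).
    Returns a (n_classes, dim) array of prototypes.
    """
    return np.stack([
        support_emb[support_labels == c].mean(axis=0)
        for c in range(n_classes)
    ])

def predict_nearest_prototype(query_emb, prototypes):
    """Assign each query to the class of its nearest prototype.

    query_emb: (n_query, dim) array; prototypes: (n_classes, dim).
    Uses the Euclidean metric; returns (n_query,) predicted labels.
    """
    # Pairwise distances: (n_query, n_classes)
    dists = np.linalg.norm(
        query_emb[:, None, :] - prototypes[None, :, :], axis=-1
    )
    return dists.argmin(axis=1)
```

In a 5-way 5-shot task, `support_emb` would hold 25 embeddings (5 per class) and each prototype is the mean of 5 vectors; the metric could equally be cosine or a learned distance.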