Low-shot Learning in Natural Language Processing
Congying Xia, Chenwei Zhang, Jiawei Zhang, Tingting Liang, Hao Peng, Philip S. Yu
2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI), October 2020
DOI: 10.1109/CogMI50398.2020.00031
Citations: 4
Abstract
This paper studies the low-shot learning paradigm in Natural Language Processing (NLP), which aims to equip models with the ability to adapt to new tasks or new domains with limited annotated data, such as zero or only a few labeled examples. Specifically, low-shot learning unifies the zero-shot and few-shot learning paradigms. Diverse low-shot learning approaches, including capsule-based networks, data-augmentation methods, and memory networks, are discussed for different NLP tasks, for example, intent detection and named entity typing. We also outline potential future directions for low-shot learning in NLP.
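
To make the few-shot setting concrete, below is a minimal, illustrative sketch (not the paper's method) of metric-based few-shot intent detection: each new intent is represented by the average embedding, or prototype, of its few labeled utterances, and a query is assigned to the nearest prototype. The embed function here is a stand-in assumption; a real system would use a pretrained sentence encoder.

```python
import numpy as np

# Placeholder embedding function (assumption): deterministic random vectors.
# A real system would substitute a pretrained sentence encoder here.
def embed(sentence: str, dim: int = 64) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(sentence)) % (2**32))
    return rng.standard_normal(dim)

def prototypes(support: dict[str, list[str]]) -> dict[str, np.ndarray]:
    """Average the embeddings of the few labeled examples per intent."""
    return {label: np.mean([embed(s) for s in examples], axis=0)
            for label, examples in support.items()}

def classify(query: str, protos: dict[str, np.ndarray]) -> str:
    """Assign the query to the intent with the most similar prototype."""
    q = embed(query)
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(protos, key=lambda label: cos(q, protos[label]))

# Few-shot setup: only two labeled utterances per new intent.
support_set = {
    "book_flight": ["book me a flight to Paris", "I need a plane ticket"],
    "play_music":  ["play some jazz", "put on my workout playlist"],
}
print(classify("get me a ticket to Tokyo", prototypes(support_set)))
```

With a meaningful encoder in place of the random embed, this nearest-prototype scheme needs no retraining to handle a new intent: adding one more entry to the support set is sufficient, which is what makes metric-based approaches attractive in the low-shot regime the paper surveys.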