Multi-Criterion Active Learning in Conditional Random Fields
Christopher T. Symons, N. Samatova, R. Krishnamurthy, Byung-Hoon Park, Tarik Umar, David J. Buttler, T. Critchlow, D. Hysom
2006 18th IEEE International Conference on Tools with Artificial Intelligence (ICTAI'06), November 13, 2006
DOI: 10.1109/ICTAI.2006.90
Citations: 23
Abstract
Conditional random fields (CRFs), which are popular supervised learning models for many natural language processing (NLP) tasks, typically require a large collection of labeled data for training. In practice, however, manual annotation of text documents is quite costly. Furthermore, even large labeled training sets can have arbitrarily limited performance peaks if they are not chosen with care. This paper considers the use of multi-criterion active learning to identify a small but sufficient set of text samples for training CRFs. Our empirical results demonstrate that our method is capable of reducing manual annotation costs while also limiting the retraining costs that are often associated with active learning. In addition, we show that the generalization performance of CRFs can be enhanced through judicious selection of training examples.
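The abstract describes pool-based active learning in which unlabeled sentences are ranked by more than one selection criterion before being sent for annotation. The sketch below is only an illustrative example of that general idea, not the authors' algorithm: it greedily picks sentences by a weighted mix of a token-entropy uncertainty criterion and a simple bag-of-words diversity criterion. The CRF's per-token marginals are stubbed out by a placeholder function, and all names and weights (token_marginals, alpha, batch_size) are assumptions made for illustration.

import numpy as np

def token_marginals(sentence, num_labels=5, rng=None):
    # Placeholder for a trained CRF's per-token marginal label distributions
    # (in practice these would come from forward-backward on the current model).
    rng = rng or np.random.default_rng(0)
    probs = rng.random((len(sentence), num_labels))
    return probs / probs.sum(axis=1, keepdims=True)

def uncertainty(sentence):
    # Criterion 1: mean token entropy under the current model (higher = less certain).
    p = token_marginals(sentence)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

def bow_vector(sentence, vocab):
    v = np.zeros(len(vocab))
    for w in sentence:
        v[vocab[w]] += 1.0
    return v

def diversity(sentence, selected, vocab):
    # Criterion 2: dissimilarity to sentences already selected (higher = more novel).
    if not selected:
        return 1.0
    v = bow_vector(sentence, vocab)
    sims = []
    for s in selected:
        u = bow_vector(s, vocab)
        denom = np.linalg.norm(v) * np.linalg.norm(u) + 1e-12
        sims.append(float(v @ u) / denom)
    return 1.0 - max(sims)

def select_batch(pool, batch_size=2, alpha=0.7):
    # Multi-criterion selection: rank by a weighted combination of uncertainty
    # and diversity, choosing greedily so each pick accounts for earlier picks.
    vocab = {w: i for i, w in enumerate(sorted({w for s in pool for w in s}))}
    selected, remaining = [], list(pool)
    while remaining and len(selected) < batch_size:
        scores = [alpha * uncertainty(s) + (1 - alpha) * diversity(s, selected, vocab)
                  for s in remaining]
        selected.append(remaining.pop(int(np.argmax(scores))))
    return selected

pool = [["John", "lives", "in", "Boston"],
        ["Mary", "works", "at", "IBM"],
        ["John", "lives", "in", "Boston", "too"]]
print(select_batch(pool))

In a full active-learning loop the CRF would be retrained after each annotated batch and the marginals recomputed; how often that retraining happens is exactly the retraining cost the abstract says the method tries to limit.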