{"title":"Chinese Coreference Resolution via Bidirectional LSTMs using Word and Token Level Representations","authors":"Kun Ming","doi":"10.1109/CIS52066.2020.00024","DOIUrl":null,"url":null,"abstract":"Coreference resolution is an important task in the field of natural language processing. Most existing methods usually utilize word-level representations, ignoring massive information from the texts. To address this issue, we investigate how to improve Chinese coreference resolution by using span-level semantic representations. Specifically, we propose a model which acquires word and character representations through pre-trained Skip-Gram embeddings and pre-trained BERT, then explicitly leverages span-level information by performing bidirectional LSTMs among above representations. Experiments on CoNLL-2012 shared task have demonstrated that the proposed model achieves 62.95% F1-score, outperforming our baseline methods.","PeriodicalId":106959,"journal":{"name":"2020 16th International Conference on Computational Intelligence and Security (CIS)","volume":"120 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 16th International Conference on Computational Intelligence and Security (CIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CIS52066.2020.00024","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Coreference resolution is an important task in natural language processing. Most existing methods rely on word-level representations alone, discarding much of the information in the text. To address this issue, we investigate how to improve Chinese coreference resolution using span-level semantic representations. Specifically, we propose a model that obtains word and character representations from pre-trained Skip-Gram embeddings and pre-trained BERT, then explicitly leverages span-level information by running bidirectional LSTMs over these representations. Experiments on the CoNLL-2012 shared task demonstrate that the proposed model achieves an F1-score of 62.95%, outperforming our baseline methods.
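The abstract only outlines the encoder pipeline (pre-trained Skip-Gram word embeddings plus pre-trained BERT character embeddings, fed to a bidirectional LSTM, from which span-level representations are derived); it does not give dimensions, the span-scoring head, or how character embeddings are aligned to words. The following is a minimal PyTorch sketch of that pipeline under those assumptions; the class name `SpanEncoder`, all dimensions, and the boundary-state span representation are illustrative choices, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class SpanEncoder(nn.Module):
    """Sketch of the described encoder: concatenate pre-trained word
    (Skip-Gram) and character (BERT) representations, run a BiLSTM over
    them, and read off span-level representations. Dimensions are
    assumed, not taken from the paper."""

    def __init__(self, word_dim=300, char_dim=768, hidden_dim=200):
        super().__init__()
        # hidden_dim is per direction; BiLSTM output is 2 * hidden_dim
        self.bilstm = nn.LSTM(word_dim + char_dim, hidden_dim,
                              batch_first=True, bidirectional=True)

    def forward(self, word_emb, char_emb, span_starts, span_ends):
        # word_emb: (batch, seq_len, word_dim)  -- pre-trained Skip-Gram
        # char_emb: (batch, seq_len, char_dim)  -- pre-trained BERT,
        #           assumed already aligned to word positions
        x = torch.cat([word_emb, char_emb], dim=-1)
        h, _ = self.bilstm(x)  # (batch, seq_len, 2 * hidden_dim)
        # One common span representation (an assumption here):
        # concatenate the BiLSTM states at each span's boundaries.
        batch_idx = torch.arange(h.size(0)).unsqueeze(1)  # (batch, 1)
        starts = h[batch_idx, span_starts]  # (batch, n_spans, 2 * hidden_dim)
        ends = h[batch_idx, span_ends]
        return torch.cat([starts, ends], dim=-1)  # (batch, n_spans, 4 * hidden_dim)
```

In a full coreference model, these span representations would be scored in pairs to decide which mentions corefer; that head, along with mention pruning, is beyond what the abstract specifies.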