{"title":"使用深度学习方法的跨语言文本蕴涵","authors":"Wubie Belay, M. Meshesha, Dagnachew Melesew","doi":"10.1109/ict4da53266.2021.9672220","DOIUrl":null,"url":null,"abstract":"Natural Language processing is dealing with natural language understandings and natural language generation which enable computers to understand and analyze human language. Cross-lingual Textual Entailment (CLTE) is one of the applications of NLU if there exists premise (P) as a source language and hypothesis (H) as a target language. CLTE is challenging for transferring information between under resource (Amharic) language and high resource (English) language. To solve this problem, we have proposed Cross-lingual Textual Entailment model using deep neural network approaches. We have used Bi-LSTM to transfer sequential information, XLNet for handling a position of word and its boundary, MLP for classification and prediction outputs, and FastText to word representations. Neural machine translation is utilized for translating English sentences into Amharic sentences with IBM5 alignment. We have combined Amharic dataset with SNLI dataset and annotated based on multi-way classification. The NMT predicts 96.01% of the testing accuracy. We have obtained 89.92% training and 86.89% testing accuracy for the proposed model. The issue with this research is that it ignores multiple inferences.","PeriodicalId":371663,"journal":{"name":"2021 International Conference on Information and Communication Technology for Development for Africa (ICT4DA)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cross-lingual textual entailment using deep learning approach\",\"authors\":\"Wubie Belay, M. Meshesha, Dagnachew Melesew\",\"doi\":\"10.1109/ict4da53266.2021.9672220\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Natural Language processing is dealing with natural language understandings and natural language generation which enable computers to understand and analyze human language. Cross-lingual Textual Entailment (CLTE) is one of the applications of NLU if there exists premise (P) as a source language and hypothesis (H) as a target language. CLTE is challenging for transferring information between under resource (Amharic) language and high resource (English) language. To solve this problem, we have proposed Cross-lingual Textual Entailment model using deep neural network approaches. We have used Bi-LSTM to transfer sequential information, XLNet for handling a position of word and its boundary, MLP for classification and prediction outputs, and FastText to word representations. Neural machine translation is utilized for translating English sentences into Amharic sentences with IBM5 alignment. We have combined Amharic dataset with SNLI dataset and annotated based on multi-way classification. The NMT predicts 96.01% of the testing accuracy. We have obtained 89.92% training and 86.89% testing accuracy for the proposed model. 
The issue with this research is that it ignores multiple inferences.\",\"PeriodicalId\":371663,\"journal\":{\"name\":\"2021 International Conference on Information and Communication Technology for Development for Africa (ICT4DA)\",\"volume\":\"61 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-11-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Conference on Information and Communication Technology for Development for Africa (ICT4DA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ict4da53266.2021.9672220\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Information and Communication Technology for Development for Africa (ICT4DA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ict4da53266.2021.9672220","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Natural language processing deals with natural language understanding (NLU) and natural language generation, enabling computers to understand and analyze human language. Cross-lingual Textual Entailment (CLTE) is an NLU task in which the premise (P) is written in a source language and the hypothesis (H) in a target language. CLTE is challenging when information must be transferred between a low-resource language (Amharic) and a high-resource language (English). To address this problem, we propose a cross-lingual textual entailment model based on deep neural network approaches. We use a Bi-LSTM to capture sequential information, XLNet to handle word positions and boundaries, an MLP for classification and prediction, and FastText for word representations. Neural machine translation (NMT) with IBM Model 5 alignment is used to translate English sentences into Amharic. We combine an Amharic dataset with the SNLI dataset and annotate it for multi-way classification. The NMT component achieves 96.01% testing accuracy, and the proposed entailment model achieves 89.92% training and 86.89% testing accuracy. A limitation of this work is that it does not handle multiple inferences.
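To make the described pipeline more concrete, the following is a minimal sketch of a Bi-LSTM + MLP entailment classifier of the kind the abstract outlines, assuming a PyTorch implementation. The layer sizes, the [p, h, |p-h|, p*h] sentence-pair feature combination, the randomly initialised embeddings (standing in for the paper's FastText vectors), and the omission of the XLNet and NMT components are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of a Bi-LSTM + MLP entailment classifier similar to the
# one described in the abstract. Layer sizes and the feature combination are
# assumptions for illustration only.
import torch
import torch.nn as nn

class CLTEClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=128, num_classes=3):
        super().__init__()
        # The paper uses FastText word representations; a plain embedding
        # layer stands in for them here.
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Bi-LSTM encoder for sequential information.
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # MLP head for multi-way classification
        # (e.g. entailment / contradiction / neutral).
        self.mlp = nn.Sequential(
            nn.Linear(8 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def encode(self, tokens):
        # Encode one sentence; max-pool over time to get a fixed-size vector.
        embedded = self.embedding(tokens)        # (batch, seq, embed_dim)
        outputs, _ = self.encoder(embedded)      # (batch, seq, 2*hidden_dim)
        return outputs.max(dim=1).values         # (batch, 2*hidden_dim)

    def forward(self, premise, hypothesis):
        p = self.encode(premise)      # source-side premise (translated)
        h = self.encode(hypothesis)   # target-side hypothesis
        # Standard SNLI-style combination of the two sentence vectors.
        features = torch.cat([p, h, torch.abs(p - h), p * h], dim=-1)
        return self.mlp(features)     # unnormalised class scores

# Example usage with dummy token ids.
model = CLTEClassifier(vocab_size=10000)
premise = torch.randint(1, 10000, (2, 12))
hypothesis = torch.randint(1, 10000, (2, 9))
logits = model(premise, hypothesis)   # shape: (2, 3)
```

In the full system described by the abstract, the premise would first be translated into the hypothesis language by the NMT component, and pretrained FastText vectors (plus XLNet-derived positional information) would replace the randomly initialised embeddings shown here.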