Multiclass Imbalanced Classification of Quranic Verses Using Deep Learning Approach
Aqsa Noor, Ahmad Ali
2021 4th International Conference on Computing & Information Sciences (ICCIS), published 2021-11-29
DOI: 10.1109/ICCIS54243.2021.9676386
This paper applies a deep learning approach to the classification of Quranic verses. The dataset is imbalanced, so it is first balanced by oversampling. The verses are classified using Bidirectional Encoder Representations from Transformers (BERT) word embeddings, which take the context of words into account: BERT reads a word together with all of its neighboring words and assigns a representation accordingly. Furthermore, to ensure that the classifier retains the most important parts of the input sequence, deep learning classifiers based on Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) layers are used for classification. The BERT word embeddings of the text data are fed to three neural network (NN) models: an NN with LSTM, which achieved an F1-score and accuracy of 0.85 for uncased embeddings, and an F1-score of 0.82 with accuracy of 0.83 for cased embeddings; an NN with GRU, which achieved an F1-score and accuracy of 0.89 for uncased and 0.90 for cased embeddings; and a fine-tuned BERT model, which achieved an F1-score of 0.93 and accuracy of 0.98 for both base-uncased and base-cased embeddings.
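The abstract does not specify which oversampling method the authors used, but the balancing step it describes can be illustrated with naive random oversampling, where minority-class samples are duplicated at random until every class matches the majority-class count. The function name and the toy verse labels below are hypothetical, for illustration only:

```python
import random
from collections import Counter

def oversample(samples, labels, seed=0):
    """Randomly duplicate minority-class samples until every class
    reaches the majority-class count (naive random oversampling).
    This is an illustrative sketch, not the paper's exact procedure."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())          # size of the largest class
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    out_samples, out_labels = [], []
    for y, items in by_class.items():
        out_samples.extend(items)          # keep every original sample
        out_labels.extend([y] * len(items))
        extra = target - len(items)        # how many duplicates are needed
        out_samples.extend(rng.choices(items, k=extra))
        out_labels.extend([y] * extra)
    return out_samples, out_labels

# Toy example: class "A" dominates, "B" and "C" are minorities.
verses = ["v1", "v2", "v3", "v4", "v5"]
labels = ["A", "A", "A", "B", "C"]
bal_verses, bal_labels = oversample(verses, labels)
print(Counter(bal_labels))  # every class now has 3 samples
```

More sophisticated alternatives (e.g. SMOTE-style synthetic sampling) exist, but simple duplication like this is the most common baseline for balancing a multiclass text dataset before embedding.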