Enhanced BERT Approach to Score Arabic Essay's Relevance to the Prompt
Rim Aroua Machhout, C. Ben Othmane Zribi
Communications of the IBIMA, Vol. 9, No. 1, March 1, 2024
DOI: https://doi.org/10.5171/2024.176992
Citations: 0
Abstract
In recent years, automated essay scoring systems have seen significant progress, particularly with the integration of deep learning algorithms. This shift marks a move away from the traditional focus on style and grammar toward a more in-depth analysis of text content. Despite these advances, the relevance of an essay to its prompt remains underexplored, especially in the context of the Arabic language. To address this gap, we propose a novel approach for scoring the relevance between essays and prompts. Specifically, our aim is to assign a score reflecting how adequately a student's long answer addresses an open-ended question. Our Arabic-language approach builds upon AraBERT, the Arabic version of BERT, enhanced with specially developed handcrafted features. Our approach yielded promising results, achieving a correlation of 0.88 with human scores.
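The abstract describes combining AraBERT-based representations with handcrafted features to score essay-prompt relevance. The paper does not specify the exact feature set or combination scheme, so the sketch below is only illustrative: it assumes sentence embeddings for the prompt and essay are already available (e.g., from AraBERT), combines their cosine similarity with two hypothetical handcrafted features (lexical overlap and a length ratio), and produces a weighted relevance score. The feature names and weights are assumptions, not the authors' method.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def handcrafted_features(prompt_tokens, essay_tokens):
    """Two illustrative handcrafted features (assumptions, not the paper's set):
    lexical overlap with the prompt vocabulary, and a capped length ratio."""
    prompt_vocab = set(prompt_tokens)
    overlap = len(prompt_vocab & set(essay_tokens)) / max(len(prompt_vocab), 1)
    length_ratio = min(len(essay_tokens) / max(len(prompt_tokens), 1), 1.0)
    return [overlap, length_ratio]

def relevance_score(prompt_emb, essay_emb, prompt_tokens, essay_tokens, weights):
    """Weighted combination of embedding similarity and handcrafted features.
    In the paper, a trained model would supply the combination; the fixed
    weights here stand in for that learned component."""
    feats = [cosine_similarity(prompt_emb, essay_emb)]
    feats += handcrafted_features(prompt_tokens, essay_tokens)
    return float(np.dot(weights, feats))

# Toy usage with dummy embeddings in place of AraBERT output.
prompt_emb = np.array([1.0, 0.0, 0.5])
essay_emb = np.array([1.0, 0.0, 0.5])  # identical -> cosine similarity 1.0
prompt_toks = ["ما", "هي", "أسباب", "التلوث"]
essay_toks = ["أسباب", "التلوث", "كثيرة", "منها"]
score = relevance_score(prompt_emb, essay_emb, prompt_toks, essay_toks,
                        weights=[0.5, 0.3, 0.2])
```

In practice, the embeddings would come from a pretrained AraBERT checkpoint (e.g., via the Hugging Face `transformers` library), and the weights would be learned by regressing against human relevance scores.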