{"title":"A Detailed Comparative Analysis of Automatic Neural Metrics for Machine Translation: BLEURT & BERTScore","authors":"Aniruddha Mukherjee;Vikas Hassija;Vinay Chamola;Karunesh Kumar Gupta","doi":"10.1109/OJCS.2025.3560333","DOIUrl":null,"url":null,"abstract":"<sc><b>Bleurt</b></small> is a recently introduced metric that employs <sc>Bert</small>, a potent pre-trained language model to assess how well candidate translations compare to a reference translation in the context of machine translation outputs. While traditional metrics like<sc>Bleu</small> rely on lexical similarities, <sc>Bleurt</small> leverages <sc>Bert</small>’s semantic and syntactic capabilities to provide more robust evaluation through complex text representations. However, studies have shown that <sc>Bert</small>, despite its impressive performance in natural language processing tasks can sometimes deviate from human judgment, particularly in specific syntactic and semantic scenarios. Through systematic experimental analysis at the word level, including categorization of errors such as lexical mismatches, untranslated terms, and structural inconsistencies, we investigate how <sc>Bleurt</small> handles various translation challenges. Our study addresses three central questions: What are the strengths and weaknesses of <sc>Bleurt</small>, how do they align with <sc>Bert</small>’s known limitations, and how does it compare with the similar automatic neural metric for machine translation, <sc>BERTScore</small>? Using manually annotated datasets that emphasize different error types and linguistic phenomena, we find that <sc>Bleurt</small> excels at identifying nuanced differences between sentences with high overlap, an area where <sc>BERTScore</small> shows limitations. Our systematic experiments, provide insights for their effective application in machine translation evaluation.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"658-668"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10964149","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of the Computer Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10964149/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Bleurt is a recently introduced metric that employs Bert, a powerful pre-trained language model, to assess how well candidate translations compare to a reference translation in machine translation output. While traditional metrics like Bleu rely on lexical similarity, Bleurt leverages Bert's semantic and syntactic capabilities to provide a more robust evaluation through complex text representations. However, studies have shown that Bert, despite its impressive performance on natural language processing tasks, can sometimes deviate from human judgment, particularly in specific syntactic and semantic scenarios. Through systematic experimental analysis at the word level, including the categorization of errors such as lexical mismatches, untranslated terms, and structural inconsistencies, we investigate how Bleurt handles various translation challenges. Our study addresses three central questions: What are the strengths and weaknesses of Bleurt? How do they align with Bert's known limitations? And how does Bleurt compare with BERTScore, a similar automatic neural metric for machine translation? Using manually annotated datasets that emphasize different error types and linguistic phenomena, we find that Bleurt excels at identifying nuanced differences between sentences with high overlap, an area where BERTScore shows limitations. Our systematic experiments provide insights into the effective application of both metrics in machine translation evaluation.
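As a concrete illustration of the two metrics being compared, the sketch below scores a candidate translation against a reference with both BERTScore and BLEURT. This is not the authors' experimental setup: it assumes the community bert-score package and the Hugging Face evaluate wrapper for BLEURT (which pulls a default BLEURT checkpoint), and the example sentences are invented.

import evaluate
from bert_score import score as bertscore

# Illustrative candidate/reference pair (not from the paper's datasets).
candidates = ["The cat sat on the mat."]
references = ["The cat is sitting on the mat."]

# BERTScore: matches tokens via cosine similarity over contextual BERT
# embeddings and aggregates into precision, recall, and F1 (torch tensors).
P, R, F1 = bertscore(candidates, references, lang="en")
print(f"BERTScore F1: {F1.item():.4f}")

# BLEURT: a BERT model fine-tuned on human quality judgments; it returns
# one learned score per candidate-reference pair (higher is better).
bleurt = evaluate.load("bleurt", module_type="metric")
result = bleurt.compute(predictions=candidates, references=references)
print(f"BLEURT: {result['scores'][0]:.4f}")

The design contrast mirrors the abstract's framing: BERTScore is an unsupervised embedding-similarity measure, while BLEURT is a regression model trained on human ratings, which is what lets it register nuanced differences between highly overlapping sentences.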