ParKQ: An automated Paraphrase ranKing Quality measure that balances semantic similarity with lexical diversity
{"title":"ParKQ:兼顾语义相似性和词汇多样性的自动转述质量度量法","authors":"Thanh Duong , Tuan-Dung Le , Ho’omana Nathan Horton , Stephanie Link , Thanh Thieu","doi":"10.1016/j.nlp.2024.100054","DOIUrl":null,"url":null,"abstract":"<div><p>BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) have set new state-of-the-art performance on paraphrase quality measurement. However, their main focus is on semantic similarity and lack the lexical diversity between two sentences. LexDivPara (Thieu et al., 2022) introduced a method that combines semantic similarity and lexical diversity, but the method is dependent on a human-provided semantic score to enhance its overall performance. In this work, we present <strong>ParKQ</strong> (<u>Par</u>aphrase ran<u>K</u>ing <u>Q</u>uality), a fully automatic method for measuring the holistic quality of sentential paraphrases. We create a semantic similarity ensemble model by combining the most popular adaptation of the pre-trained BERT (Devlin et al., 2019) network: BLEURT (Sellam et al., 2020), BERTSCORE (Zhang et al., 2020) and Sentence-BERT (Reimers et al., 2019). Then we build paraphrase quality learning-to-rank models with XGBoost (Chen et al., 2016) and TFranking (Pasumarthi et al., 2019) by combining the ensemble semantic score with lexical features including edit distance, BLEU, and ROUGE. To analyze and evaluate the intricate paraphrase quality measure, we create a gold-standard dataset using expert linguistic coding. The gold-standard annotation comprises four linguistic scores (semantic, lexical, grammatical, overall) and spans across three heterogeneous datasets commonly used to benchmark paraphrasing tasks: STS Benchmark,<span><sup>1</sup></span> ParaBank Evaluation<span><sup>2</sup></span> and MSR corpus.<span><sup>3</sup></span> Our <strong>ParKQ</strong> models demonstrate robust correlation with all linguistic scores, making it the first practical tool for measuring the holistic quality (semantic similarity + lexical diversity) of sentential paraphrases. In evaluation, we compare our models against contemporary methods with the ability to generate holistic quality scores for paraphrases including LexDivPara, ParaScore, and the emergent ChatGPT.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"6 ","pages":"Article 100054"},"PeriodicalIF":0.0000,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949719124000025/pdfft?md5=2d9dd7ca1e2b2de847f402ea46f05f27&pid=1-s2.0-S2949719124000025-main.pdf","citationCount":"0","resultStr":"{\"title\":\"ParKQ: An automated Paraphrase ranKing Quality measure that balances semantic similarity with lexical diversity\",\"authors\":\"Thanh Duong , Tuan-Dung Le , Ho’omana Nathan Horton , Stephanie Link , Thanh Thieu\",\"doi\":\"10.1016/j.nlp.2024.100054\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) have set new state-of-the-art performance on paraphrase quality measurement. However, their main focus is on semantic similarity and lack the lexical diversity between two sentences. LexDivPara (Thieu et al., 2022) introduced a method that combines semantic similarity and lexical diversity, but the method is dependent on a human-provided semantic score to enhance its overall performance. 
In this work, we present <strong>ParKQ</strong> (<u>Par</u>aphrase ran<u>K</u>ing <u>Q</u>uality), a fully automatic method for measuring the holistic quality of sentential paraphrases. We create a semantic similarity ensemble model by combining the most popular adaptation of the pre-trained BERT (Devlin et al., 2019) network: BLEURT (Sellam et al., 2020), BERTSCORE (Zhang et al., 2020) and Sentence-BERT (Reimers et al., 2019). Then we build paraphrase quality learning-to-rank models with XGBoost (Chen et al., 2016) and TFranking (Pasumarthi et al., 2019) by combining the ensemble semantic score with lexical features including edit distance, BLEU, and ROUGE. To analyze and evaluate the intricate paraphrase quality measure, we create a gold-standard dataset using expert linguistic coding. The gold-standard annotation comprises four linguistic scores (semantic, lexical, grammatical, overall) and spans across three heterogeneous datasets commonly used to benchmark paraphrasing tasks: STS Benchmark,<span><sup>1</sup></span> ParaBank Evaluation<span><sup>2</sup></span> and MSR corpus.<span><sup>3</sup></span> Our <strong>ParKQ</strong> models demonstrate robust correlation with all linguistic scores, making it the first practical tool for measuring the holistic quality (semantic similarity + lexical diversity) of sentential paraphrases. In evaluation, we compare our models against contemporary methods with the ability to generate holistic quality scores for paraphrases including LexDivPara, ParaScore, and the emergent ChatGPT.</p></div>\",\"PeriodicalId\":100944,\"journal\":{\"name\":\"Natural Language Processing Journal\",\"volume\":\"6 \",\"pages\":\"Article 100054\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2949719124000025/pdfft?md5=2d9dd7ca1e2b2de847f402ea46f05f27&pid=1-s2.0-S2949719124000025-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Natural Language Processing Journal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2949719124000025\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Natural Language Processing Journal","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949719124000025","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Thanh Duong, Tuan-Dung Le, Ho’omana Nathan Horton, Stephanie Link, Thanh Thieu
BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) have set a new state of the art for paraphrase quality measurement. However, they focus mainly on semantic similarity and do not capture the lexical diversity between two sentences. LexDivPara (Thieu et al., 2022) introduced a method that combines semantic similarity and lexical diversity, but it depends on a human-provided semantic score to reach its full performance. In this work, we present ParKQ (Paraphrase ranKing Quality), a fully automatic method for measuring the holistic quality of sentential paraphrases. We create a semantic similarity ensemble by combining the most popular adaptations of the pre-trained BERT (Devlin et al., 2019) network: BLEURT (Sellam et al., 2020), BERTScore (Zhang et al., 2020), and Sentence-BERT (Reimers et al., 2019). We then build paraphrase quality learning-to-rank models with XGBoost (Chen et al., 2016) and TF-Ranking (Pasumarthi et al., 2019) by combining the ensemble semantic score with lexical features, including edit distance, BLEU, and ROUGE. To analyze and evaluate this intricate paraphrase quality measure, we create a gold-standard dataset using expert linguistic coding. The gold-standard annotation comprises four linguistic scores (semantic, lexical, grammatical, overall) and spans three heterogeneous datasets commonly used to benchmark paraphrasing tasks: STS Benchmark, ParaBank Evaluation, and the MSR corpus. Our ParKQ models demonstrate robust correlation with all linguistic scores, making ParKQ the first practical tool for measuring the holistic quality (semantic similarity + lexical diversity) of sentential paraphrases. In evaluation, we compare our models against contemporary methods that can generate holistic quality scores for paraphrases, including LexDivPara, ParaScore, and the emergent ChatGPT.
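
The abstract outlines a two-stage pipeline: compute an automatic semantic score and a set of lexical-diversity features for each candidate paraphrase, then feed those features to a learning-to-rank model. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: it substitutes a single Sentence-BERT cosine similarity for the BLEURT + BERTScore + SBERT ensemble, omits TF-Ranking, and uses made-up labels and library choices (sentence-transformers, nltk, rouge_score, python-Levenshtein, xgboost) as stand-ins.

```python
# Hypothetical sketch of a ParKQ-style feature pipeline: a semantic score plus
# lexical features (edit distance, BLEU, ROUGE-L) fed to an XGBoost ranker.
import numpy as np
import xgboost as xgb
import Levenshtein  # pip install python-Levenshtein
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer
from sentence_transformers import SentenceTransformer, util

sbert = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the SBERT component
rouge = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def semantic_score(src: str, para: str) -> float:
    # Placeholder for the BLEURT + BERTScore + Sentence-BERT ensemble:
    # here only an SBERT cosine similarity is computed.
    emb = sbert.encode([src, para], convert_to_tensor=True)
    return float(util.cos_sim(emb[0], emb[1]))

def lexical_features(src: str, para: str) -> list[float]:
    # Normalized edit distance, BLEU, and ROUGE-L between source and paraphrase.
    edit = Levenshtein.distance(src, para) / max(len(src), len(para), 1)
    bleu = sentence_bleu([src.split()], para.split(),
                         smoothing_function=SmoothingFunction().method1)
    rl = rouge.score(src, para)["rougeL"].fmeasure
    return [edit, bleu, rl]

def featurize(src: str, para: str) -> list[float]:
    return [semantic_score(src, para)] + lexical_features(src, para)

# Toy example: one source sentence, three candidate paraphrases, and integer
# relevance labels standing in for expert "overall" quality codes (hypothetical).
source = "the cat sat on the mat"
candidates = ["a cat was sitting on the mat",
              "the cat sat on the mat",
              "dogs enjoy long walks"]
labels = [3, 1, 0]  # higher = better holistic paraphrase quality

X = np.array([featurize(source, c) for c in candidates])
ranker = xgb.XGBRanker(objective="rank:pairwise", n_estimators=50)
ranker.fit(X, labels, group=[len(candidates)])  # a single query group
print(ranker.predict(X))  # higher score = higher-ranked paraphrase
```

In this toy labeling, the verbatim copy receives a low label despite perfect semantic similarity, which mirrors the balance between semantic similarity and lexical diversity that the abstract emphasizes.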