Title: Improving Paragraph Similarity by Sentence Interaction With BERT
Author: Xi Jin
Journal: Expert Systems, vol. 42, no. 3 (JCR Q2, Computer Science, Artificial Intelligence; Impact Factor 3.0)
DOI: 10.1111/exsy.70003
Published: 2025-02-05 (Journal Article)
URL: https://onlinelibrary.wiley.com/doi/10.1111/exsy.70003
Citations: 0
Abstract
Research on semantic similarity between relatively short texts, for example, at word- and sentence-level, has progressed significantly in recent years. However, paragraph-level similarity has not been researched in as much detail owing to the challenges associated with embedding representations, despite its utility in numerous applications. A rudimentary approach to paragraph-level similarity involves treating each paragraph as an elongated sentence, thereby encoding the entire paragraph into a single vector. However, this results in the loss of long-distance dependency information, ignoring interactions between sentences belonging to different paragraphs. In this paper, we propose a simple yet efficient method for estimating paragraph similarity. Given two paragraphs, it first obtains a vector for each sentence by leveraging advanced sentence-embedding techniques. Next, the similarity between each sentence in the first paragraph and the second paragraph is estimated as the maximum cosine similarity value between the sentence and each sentence in the second paragraph. This process is repeated for all sentences in the first paragraph to determine the maximum similarity of each sentence with the second paragraph. Finally, overall paragraph similarity is computed by averaging the maximum cosine similarity values. This method alleviates long-range dependency by embedding sentences individually. In addition, it accounts for sentence-level interactions between the two paragraphs. Experiments conducted on two benchmark data sets demonstrate that the proposed method outperforms the baseline approach that encodes entire paragraphs into single vectors.
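The scoring procedure described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes each paragraph has already been converted into a matrix of sentence-embedding vectors (for instance, via a Sentence-BERT style encoder), and the function name `paragraph_similarity` is hypothetical.

```python
import numpy as np

def paragraph_similarity(para_a, para_b):
    """Score two paragraphs, each given as an array of sentence vectors.

    For every sentence vector in para_a, take its maximum cosine
    similarity against all sentence vectors in para_b, then average
    those per-sentence maxima (the procedure described in the abstract).
    """
    a = np.asarray(para_a, dtype=float)
    b = np.asarray(para_b, dtype=float)
    # Normalise rows so plain dot products become cosine similarities.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    sims = a @ b.T                    # (n_a, n_b) cosine-similarity matrix
    return sims.max(axis=1).mean()    # max over para_b, mean over para_a
```

Note that the measure is asymmetric as written (it iterates over the first paragraph's sentences); a symmetric variant could average `paragraph_similarity(a, b)` and `paragraph_similarity(b, a)`.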
About the Journal:
Expert Systems: The Journal of Knowledge Engineering publishes papers dealing with all aspects of knowledge engineering, including individual methods and techniques in knowledge acquisition and representation, and their application in the construction of systems – including expert systems – based thereon. Detailed scientific evaluation is an essential part of any paper.
As well as traditional application areas, such as Software and Requirements Engineering, Human-Computer Interaction, and Artificial Intelligence, we are aiming at the new and growing markets for these technologies, such as Business, Economy, Market Research, and Medical and Health Care. The shift towards this new focus will be marked by a series of special issues covering hot and emergent topics.