IR meets NLP: On the Semantic Similarity between Subject-Verb-Object Phrases
Dmitrijs Milajevs, M. Sadrzadeh, T. Roelleke
Proceedings of the 2015 International Conference on The Theory of Information Retrieval, 2015-09-27
DOI: 10.1145/2808194.2809448 (https://doi.org/10.1145/2808194.2809448)
Citations: 8
Abstract
Measuring the semantic similarity between phrases and sentences is an important task in natural language processing (NLP) and information retrieval (IR). We compare the quality of distributional semantic NLP models against phrase-based semantic IR models. The evaluation is based on the correlation between human judgements and model scores on a distributional phrase similarity task. We experiment with four NLP and two IR model variants. On the NLP side, models vary over normalization schemes and composition operators. On the IR side, models vary with respect to the estimation of the probability of a term being in a document: P(t|d), which uses only term co-occurrence information, and P(t|d, sim), which incorporates term distributional similarity. A mixture of the two methods is presented and evaluated. For both methods, word meanings are derived from large corpora: the BNC and ukWaC. One of the main findings is that grammatical distributional models give better scores than the IR models. This suggests that an IR model enriched with distributional linguistic information performs better on the long-standing IR problem of retrieving documents that have no direct symbolic relationship between query and document concepts.
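The abstract names two modelling ingredients without detailing them: composition operators over word vectors (NLP side) and a mixture of a co-occurrence estimate P(t|d) with a similarity-smoothed estimate P(t|d, sim) (IR side). The following is a minimal toy sketch of how such ingredients might fit together; the paper's exact estimators are not reproduced here, and all function names, the smoothing formula, and the mixture weight `lam` are illustrative assumptions.

```python
import math
from collections import Counter

def compose(words, vectors, op="add"):
    """Compose word vectors (sparse dicts) into one phrase vector,
    by pointwise addition or pointwise multiplication."""
    vecs = [vectors[w] for w in words]
    if op == "add":
        out = Counter()
        for v in vecs:
            out.update(v)          # union of dimensions, values summed
        return dict(out)
    keys = set(vecs[0])
    for v in vecs[1:]:
        keys &= set(v)             # multiplication keeps shared dimensions only
    return {k: math.prod(v[k] for v in vecs) for k in keys}

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(x * v.get(k, 0.0) for k, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def p_t_d(term, doc_counts):
    """Maximum-likelihood P(t|d) from raw term counts."""
    total = sum(doc_counts.values())
    return doc_counts[term] / total if total else 0.0

def p_t_d_sim(term, doc_counts, vectors):
    """Similarity-smoothed estimate (illustrative): a term gets credit
    from distributionally similar terms occurring in the document."""
    total = sum(doc_counts.values())
    if not total or term not in vectors:
        return 0.0
    score = sum(cosine(vectors[term], vectors[t]) * c
                for t, c in doc_counts.items() if t in vectors)
    return score / total

def p_mixture(term, doc_counts, vectors, lam=0.5):
    """Linear mixture of the co-occurrence and similarity-based estimates."""
    return lam * p_t_d(term, doc_counts) + (1 - lam) * p_t_d_sim(term, doc_counts, vectors)
```

For example, with `doc_counts = Counter({"cat": 2, "dog": 1})` and identical one-dimensional vectors for "cat" and "dog", `p_t_d("cat", ...)` is 2/3 while `p_t_d_sim` rises to 1.0 because "dog" contributes via its similarity to "cat"; the mixture interpolates between the two.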