{"title":"A Deep Learning Method for Sentence Embeddings Based on Hadamard Matrix Encodings","authors":"Mircea Trifan, B. Ionescu, D. Ionescu","doi":"10.1109/SACI55618.2022.9919604","DOIUrl":null,"url":null,"abstract":"Sentence embedding has recently attracted increased attention from the Natural Language Processing (NLP) community. An embedding maps a sentence to a vector of real numbers, with applications to similarity and inference tasks. Our method combines word embeddings, dependency parsing, a Hadamard matrix with a spread-spectrum algorithm, and a deep neural network trained on the Sentences Involving Compositional Knowledge (SICK) corpus. The dependency parsing labels are associated with rows in a Hadamard matrix. Word embeddings are stored in the corresponding rows of another matrix. Using the spread-spectrum encoding algorithm, the two matrices are combined into a single one-dimensional vector. This embedding is then fed to a neural network, achieving 80% accuracy, while the best score from the SEMEVAL 2014 competition is 84%. The advantages of this method stem from its ability to encode sentences of any length, use only fully connected neural networks, take word order into account, and handle long-range word dependencies.","PeriodicalId":105691,"journal":{"name":"2022 IEEE 16th International Symposium on Applied Computational Intelligence and Informatics (SACI)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 16th International Symposium on Applied Computational Intelligence and Informatics (SACI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SACI55618.2022.9919604","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Sentence embedding has recently attracted increased attention from the Natural Language Processing (NLP) community. An embedding maps a sentence to a vector of real numbers, with applications to similarity and inference tasks. Our method combines word embeddings, dependency parsing, a Hadamard matrix with a spread-spectrum algorithm, and a deep neural network trained on the Sentences Involving Compositional Knowledge (SICK) corpus. The dependency parsing labels are associated with rows in a Hadamard matrix. Word embeddings are stored in the corresponding rows of another matrix. Using the spread-spectrum encoding algorithm, the two matrices are combined into a single one-dimensional vector. This embedding is then fed to a neural network, achieving 80% accuracy, while the best score from the SEMEVAL 2014 competition is 84%. The advantages of this method stem from its ability to encode sentences of any length, use only fully connected neural networks, take word order into account, and handle long-range word dependencies.
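The abstract's core encoding idea — spreading each word embedding by the Hadamard row of its dependency label and superposing the results into one fixed-size vector — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the label-to-row mapping, the toy 4-dimensional embeddings, and the 8-row Hadamard matrix are all assumptions for the example. Because Hadamard rows are mutually orthogonal, correlating the combined matrix with a label's row recovers the embedding stored under that label.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of two.
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Hypothetical setup: 8 dependency-label slots, 4-dim toy word embeddings.
H = hadamard(8)
rng = np.random.default_rng(0)
words = {"cat": 0, "sat": 1}            # word -> assumed dependency-label row
emb = {w: rng.normal(size=4) for w in words}

# Spread-spectrum superposition: each embedding is "spread" by its label's
# Hadamard row; all contributions are summed and flattened into a single
# vector whose size is fixed regardless of sentence length.
M = sum(np.outer(H[row], emb[w]) for w, row in words.items())
sentence_vec = M.ravel()                # length 8 * 4 = 32

# Orthogonality of Hadamard rows (H[i] @ H[j] = 8 if i == j, else 0)
# lets us recover the embedding stored under a given label.
recovered = H[0] @ M / 8
assert np.allclose(recovered, emb["cat"])
```

The flattened vector `sentence_vec` would then be the fixed-size input to a fully connected network, which is what allows the method to handle sentences of any length without recurrence.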