On the representation of sparse stochastic matrices with state embedding
Jugurta Montalvão, Gabriel Bastos, Rodrigo Sousa, Ataíde Gualberto
Pattern Recognition Letters, vol. 193, pp. 71-78 (published 2025-04-21). DOI: 10.1016/j.patrec.2025.04.011
Embeddings are adjusted so that points represent the states and observations of a Markov model, with conditional probabilities approximately encoded as exponentials of negative distances, jointly scaled by a density factor. It is shown that the quality of this approximation can be controlled, mainly by choosing the embedding dimension as a function of the entropies associated with the corresponding Markov model. Consequently, for sparse (low-entropy) models, representation as state embeddings can save memory and enables fully geometric versions of probabilistic algorithms, such as the Viterbi algorithm, which is taken as an example in this work. Evidence is also gathered for potentially useful properties that emerge from the geometric representation of Markov models, such as analogies, superstates (aggregation), and semantic fields.
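Under the encoding described in the abstract, P(j | i) ≈ ρ · exp(−d(x_i, x_j)), so taking negative logarithms gives −log P(j | i) ≈ d(x_i, x_j) − log ρ, and maximizing a product of probabilities reduces to minimizing an accumulated distance. The following is a minimal sketch of what such a geometric Viterbi pass could look like, assuming Euclidean distances and a uniform density factor; the names geometric_viterbi, state_emb, and obs_emb are illustrative and not taken from the paper:

```python
import numpy as np

def geometric_viterbi(state_emb, obs_emb, obs_seq):
    """Sketch of a fully geometric Viterbi pass.

    Assumes P(j | i) ~= rho * exp(-||x_i - x_j||), so that
    -log P ~= distance - log(rho). With a uniform density factor rho,
    the -log(rho) terms shift every path cost equally and can be
    dropped: the most likely state path is then the minimum-distance
    path through the embedding space.
    """
    n_states, T = state_emb.shape[0], len(obs_seq)
    # transition "costs" are pairwise distances between state embeddings
    d_trans = np.linalg.norm(state_emb[:, None, :] - state_emb[None, :, :], axis=-1)
    cost = np.empty((T, n_states))
    back = np.zeros((T, n_states), dtype=int)
    # emission "cost" at step t: distance from each state to the observed point
    cost[0] = np.linalg.norm(state_emb - obs_emb[obs_seq[0]], axis=-1)
    for t in range(1, T):
        d_emit = np.linalg.norm(state_emb - obs_emb[obs_seq[t]], axis=-1)
        total = cost[t - 1][:, None] + d_trans + d_emit[None, :]
        back[t] = np.argmin(total, axis=0)
        cost[t] = np.min(total, axis=0)
    # backtrack the minimum-distance (i.e., maximum-likelihood) path
    path = [int(np.argmin(cost[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy usage: 3 states and 2 observation symbols embedded in 2-D
state_emb = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
obs_emb = np.array([[0.1, 0.1], [0.9, 0.1]])
print(geometric_viterbi(state_emb, obs_emb, [0, 1, 1, 0]))
```

Note that no probabilities are normalized or exponentiated anywhere in the sketch: all dynamic-programming arithmetic happens directly on distances, which is what makes the representation attractive for sparse models.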
Journal introduction:
Pattern Recognition Letters aims at rapid publication of concise articles of broad interest in pattern recognition.
Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition.