Bayesian Inference of Hidden Markov Models Through Probabilistic Boolean Operations in Spiking Neuronal Networks
Authors: Ayan Chakraborty; Saswat Chakrabarti
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 9, no. 3, pp. 2618-2632
DOI: 10.1109/TETCI.2024.3502472
Published: 2024-12-16 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10803002/
Impact factor: 5.3 · JCR: Q1 (Computer Science, Artificial Intelligence)
Citations: 0
Abstract
Recurrent neural networks (RNNs) have been used extensively for Bayesian inference of hidden Markov models (HMMs). However, such artificial neural architectures suffer from computationally expensive training procedures and high energy dissipation. Spiking neural networks (SNNs) have recently been explored for similar tasks. This paper addresses the problem of Bayesian inference of HMMs within the SNN paradigm. A population-based stochastic temporal encoding (PSTE) scheme is introduced to establish that a spiking neuron behaves as a probabilistic Boolean operator. Using this property, the posterior probability of a hidden state is mapped to the probability that a spiking neuron fires a logic HIGH. Two new algorithms are presented for fixing the synaptic strengths, denoted by a random variable $q$. The first algorithm uses a sigmoidal relationship obtained from pre-statistical analysis to select values of $q$ such that the probability of a neuron producing a logic HIGH equals the posterior probability of a hidden state. The second algorithm determines $q$ from data through in-network training. It is demonstrated that Bayesian inference of both two-state and multi-state HMMs can be implemented using PSTE. Two examples are presented: inferring the trend of a time series, and deciphering the correct digit on a seven-segment LED display with noisy bits. The framework performs very close to traditional Bayesian inference (difference in accuracy $< 2\%$) and traditional RNNs.
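For context, the "traditional Bayesian inference" baseline against which the framework is compared is the standard recursive forward filter for an HMM, which computes the posterior over hidden states that the PSTE scheme encodes as a neuron's firing probability. The sketch below shows this recursion for a two-state HMM with binary observations; the transition, emission, and prior values are illustrative placeholders, not parameters from the paper.

```python
import numpy as np

# Two-state HMM with binary observations. All numeric values below are
# illustrative assumptions, not taken from the paper.
A = np.array([[0.9, 0.1],    # A[i, j] = P(x_t = j | x_{t-1} = i)
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],    # B[i, k] = P(y_t = k | x_t = i)
              [0.3, 0.7]])
pi = np.array([0.5, 0.5])    # prior over the initial hidden state

def forward_filter(obs):
    """Recursive Bayesian posterior P(x_t | y_1..y_t) for each time step t."""
    belief = pi * B[:, obs[0]]          # incorporate the first observation
    belief /= belief.sum()
    posteriors = [belief.copy()]
    for y in obs[1:]:
        belief = (belief @ A) * B[:, y]  # predict with A, correct with B
        belief /= belief.sum()           # renormalise to a distribution
        posteriors.append(belief.copy())
    return np.array(posteriors)

post = forward_filter([1, 1, 0, 1])      # one posterior row per observation
```

In the paper's framework, each row of `post` would be represented not by an explicit probability vector but by the firing statistics of a spiking neuron whose synaptic strength $q$ is tuned so that its probability of emitting a logic HIGH matches the posterior of the corresponding hidden state.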
About the Journal
The IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) publishes original articles on emerging aspects of computational intelligence, including theory, applications, and surveys.
TETCI is an electronic-only publication and publishes six issues per year.
Authors are encouraged to submit manuscripts on any emerging topic in computational intelligence, especially nature-inspired computing topics not covered by other IEEE Computational Intelligence Society journals. Illustrative examples include glial cell networks, computational neuroscience, brain-computer interfaces, ambient intelligence, non-fuzzy computing with words, artificial life, cultural learning, artificial endocrine networks, social reasoning, artificial hormone networks, and computational intelligence for IoT and Smart-X technologies.