Real-time sensory information processing using the TrueNorth Neurosynaptic System
A. Andreou, Andrew A. Dykman, Kate D. Fischl, Guillaume Garreau, Daniel R. Mendat, G. Orchard, A. Cassidy, P. Merolla, J. Arthur, Rodrigo Alvarez-Icaza, Bryan L. Jackson, D. Modha
{"title":"Real-time sensory information processing using the TrueNorth Neurosynaptic System","authors":"A. Andreou, Andrew A. Dykman, Kate D. Fischl, Guillaume Garreau, Daniel R. Mendat, G. Orchard, A. Cassidy, P. Merolla, J. Arthur, Rodrigo Alvarez-Icaza, Bryan L. Jackson, D. Modha","doi":"10.1109/ISCAS.2016.7539214","DOIUrl":null,"url":null,"abstract":"Summary form only given. The IBM TrueNorth (TN) Neurosynaptic System, is a chip multi processor with a tightly coupled processor/memory architecture, that results in energy efficient neurocomputing and it is a significant milestone to over 30 years of neuromorphic engineering! It comprises of 4096 cores each core with 65K of local memory (6T SRAM)-synapses- and 256 arithmetic logic units - neurons-that operate on a unary number representation and compute by counting up to a maximum of 19 bits. The cores are event-driven using custom asynchronous and synchronous logic, and they are globally connected through an asynchronous packet switched mesh network on chip (NOC). The chip development board, includes a Zyng Xilinx FPGA that does the housekeeping and provides support for standard communication support through an Ethernet UDP interface. The asynchronous Addressed Event Representation (AER) in the NOC is al so exposed to the user for connection to AER based peripherals through a packet with bundled data full duplex interface. The unary data values represented on the system buses can take on a wide variety of spatial and temporal encoding schemes. Pulse density coding (the number of events Ne represents a number N), thermometer coding, time-slot encoding, and stochastic encoding are examples. Additional low level interfaces are available for communicating directly with the TrueNorth chip to aid programming and parameter setting. A hierarchical, compositional programming language, Corelet, is available to aid the development of TN applications. IBM provides support and a development system as well as “Compass” a scalable simulator. The software environment runs under standard Linux installations (Red Hat, CentOS and Ubuntu) and has standard interfaces to Matlab and to Caffe that is employed to train deep neural network models. The TN architecture can be interfaced using native AER to a number of bio-inspired sensory devices developed over many years of neuromorphic engineering (silicon retinas and silicon cochleas). In addition the architecture is well suited for implementing deep neural networks with many applications in computer vision, speech recognition and language processing. In a sensory information processing system architecture one desires both pattern processing in space and time to extract features in symbolic sub-spaces as well as natural language processing to provide contextual and semantic information in the form of priors. In this paper we discuss results from ongoing experimental work on real-time sensory information processing using the TN architecture in three different areas (i) spatial pattern processing -computer vision(ii) temporal pattern processing -speech processing and recognition(iii) natural language processing -word similarity-. 
A real-time demonstration will be done at ISCAS 2016 using the TN system and neuromorphic event based sensors for audition (silicon cochlea) and vision (silicon retina).","PeriodicalId":6546,"journal":{"name":"2016 IEEE International Symposium on Circuits and Systems (ISCAS)","volume":"26 1","pages":"2911-2911"},"PeriodicalIF":0.0000,"publicationDate":"2016-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE International Symposium on Circuits and Systems (ISCAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCAS.2016.7539214","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 12
Abstract
Summary form only given. The IBM TrueNorth (TN) Neurosynaptic System is a chip multiprocessor with a tightly coupled processor/memory architecture that yields energy-efficient neurocomputing, and it marks a significant milestone in over 30 years of neuromorphic engineering. It comprises 4096 cores, each with 65K of local memory (6T SRAM), the synapses, and 256 arithmetic logic units, the neurons, which operate on a unary number representation and compute by counting, up to a maximum of 19 bits. The cores are event-driven, using custom asynchronous and synchronous logic, and are globally connected through an asynchronous packet-switched mesh network on chip (NoC). The chip development board includes a Xilinx Zynq FPGA that handles housekeeping and provides standard communication through an Ethernet UDP interface. The asynchronous Address-Event Representation (AER) in the NoC is also exposed to the user for connection to AER-based peripherals through a packet-based, bundled-data, full-duplex interface. The unary data values represented on the system buses can take on a wide variety of spatial and temporal encoding schemes; pulse density coding (the number of events Ne represents a number N), thermometer coding, time-slot encoding, and stochastic encoding are examples. Additional low-level interfaces are available for communicating directly with the TrueNorth chip to aid programming and parameter setting. A hierarchical, compositional programming language, Corelet, is available to aid the development of TN applications. IBM provides support and a development system, as well as "Compass", a scalable simulator. The software environment runs under standard Linux installations (Red Hat, CentOS, and Ubuntu) and has standard interfaces to Matlab and to Caffe, which is employed to train deep neural network models. The TN architecture can be interfaced via native AER to a number of bio-inspired sensory devices developed over many years of neuromorphic engineering (silicon retinas and silicon cochleas). In addition, the architecture is well suited to implementing deep neural networks, with many applications in computer vision, speech recognition, and language processing. In a sensory information processing system architecture, one desires both pattern processing in space and time, to extract features in symbolic sub-spaces, and natural language processing, to provide contextual and semantic information in the form of priors. In this paper we discuss results from ongoing experimental work on real-time sensory information processing using the TN architecture in three areas: (i) spatial pattern processing (computer vision); (ii) temporal pattern processing (speech processing and recognition); and (iii) natural language processing (word similarity). A real-time demonstration will be given at ISCAS 2016 using the TN system and neuromorphic event-based sensors for audition (silicon cochlea) and vision (silicon retina).
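The spatial and temporal encoding schemes named in the abstract can be illustrated compactly. The following is a minimal Python sketch, not TrueNorth or Corelet API code, of how a scalar value might be mapped onto unary spike patterns under pulse density, thermometer, time-slot, and stochastic coding; the function names, the fixed window length, and the 0/1 array representation are illustrative assumptions rather than details of the system described in the paper.

```python
# Hypothetical illustration of the four unary encoding schemes mentioned in the
# abstract. Values are assumed to be non-negative integers (or a probability for
# the stochastic case) that fit within a fixed window of ticks or lines.
import numpy as np

def pulse_density(value, window):
    """Value N -> N spike events in a window; only the event count carries the value."""
    spikes = np.zeros(window, dtype=np.uint8)
    spikes[:value] = 1  # placement within the window is irrelevant for this code
    return spikes

def thermometer(value, levels):
    """Value N -> the first N of `levels` parallel lines are active."""
    lines = np.zeros(levels, dtype=np.uint8)
    lines[:value] = 1
    return lines

def time_slot(value, window):
    """Value N -> a single spike in tick N; the spike's position carries the value."""
    spikes = np.zeros(window, dtype=np.uint8)
    spikes[value] = 1
    return spikes

def stochastic(prob, window, seed=0):
    """Probability p -> each tick spikes independently with probability p."""
    rng = np.random.default_rng(seed)
    return (rng.random(window) < prob).astype(np.uint8)

if __name__ == "__main__":
    print(pulse_density(3, 8))  # [1 1 1 0 0 0 0 0]
    print(thermometer(3, 8))    # [1 1 1 0 0 0 0 0]
    print(time_slot(3, 8))      # [0 0 0 1 0 0 0 0]
    print(stochastic(0.3, 8))   # random pattern with ~0.3 mean spike rate
```

In all four cases the signal remains unary (each tick or line carries a single event), which is consistent with the abstract's description of neurons that compute by counting incoming events.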