3-D In-Sensor Computing for Real-Time DVS Data Compression: 65-nm Hardware-Algorithm Co-Design

Gopikrishnan R. Nair; Pragnya S. Nalla; Gokul Krishnan; Anupreetham; Jonghyun Oh; Ahmed Hassan; Injune Yeo; Kishore Kasichainula; Mingoo Seok; Jae-Sun Seo; Yu Cao

IEEE Solid-State Circuits Letters, vol. 7, pp. 119-122, 2024. DOI: 10.1109/LSSC.2024.3375110
Abstract
Traditional I/O links are insufficient to transport the high volume of image sensor data under stringent power and latency constraints. To address this, we demonstrate a low-latency, low-power in-sensor computing architecture that compresses the data from a 3-D-stacked dynamic vision sensor (DVS). In this design, we adopt a 4-bit autoencoder algorithm and implement it on an AI computing layer with in-memory computing (IMC) to enable real-time compression of DVS data. To support 3-D integration, the architecture is optimized for the unique constraints of that setting: a footprint matched to the size of the sensor array, low latency to manage the continuous data stream, and low power consumption to avoid thermal issues. Our prototype chip in 65-nm CMOS demonstrates the new concept of 3-D in-sensor computing, achieving < 6 mW power consumption at a 1–10 MHz operating frequency and a $10\times$ compression ratio on $256\times 256$ DVS pixels.
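The abstract gives only the headline parameters (a 4-bit autoencoder, a $10\times$ compression ratio, $256\times 256$ pixels). As a rough illustration of how those numbers can fit together, the sketch below runs a patch-wise autoencoder with 4-bit-quantized weights and activations over a 256x256 binary DVS event frame. Every layer size, patch dimension, quantization scale, and function name here is an assumption made for illustration, not the authors' design or the chip's mapping.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation):
# a patch-wise autoencoder with 4-bit weights/activations compressing a
# 256x256 binary DVS event frame by roughly 10x in stored values.
import numpy as np


def quantize_4bit(x, scale):
    """Uniform symmetric 4-bit quantization: 16 levels, integers in [-8, 7]."""
    return np.clip(np.round(x / scale), -8, 7) * scale


class TinyAutoencoder4b:
    def __init__(self, patch=16, code=25, seed=0):
        rng = np.random.default_rng(seed)
        d = patch * patch                    # 256 inputs per 16x16 patch (assumed)
        self.patch = patch
        # Random weights stand in for trained ones; they are quantized to
        # 4 bits, as they would be when mapped onto an IMC array.
        self.w_enc = quantize_4bit(rng.standard_normal((d, code)) * 0.1, 0.05)
        self.w_dec = quantize_4bit(rng.standard_normal((code, d)) * 0.1, 0.05)

    def encode(self, patch_vec):
        act = np.maximum(patch_vec @ self.w_enc, 0.0)   # ReLU
        return quantize_4bit(act, 0.25)                  # 4-bit activations

    def decode(self, code_vec):
        return code_vec @ self.w_dec


def compress_frame(frame, ae):
    """Split the frame into non-overlapping patches and encode each one."""
    p = ae.patch
    codes = []
    for r in range(0, frame.shape[0], p):
        for c in range(0, frame.shape[1], p):
            codes.append(ae.encode(frame[r:r + p, c:c + p].reshape(-1)))
    return np.stack(codes)


if __name__ == "__main__":
    ae = TinyAutoencoder4b()
    # Sparse binary event frame as a stand-in for accumulated DVS events.
    frame = (np.random.default_rng(1).random((256, 256)) < 0.05).astype(np.float32)
    codes = compress_frame(frame, ae)
    in_vals = frame.size    # 65,536 pixel values
    out_vals = codes.size   # 256 patches x 25 code values = 6,400
    print(f"compression ratio ~ {in_vals / out_vals:.1f}x (in element count)")
```

With a 16x16 patch and a 25-dimensional code, the element-count ratio is 65,536 / 6,400, or about 10.2x; the exact bit-level ratio additionally depends on the pixel and code bit widths, which are not specified in the abstract.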