2016 IEEE Hot Chips 28 Symposium (HCS): Latest Publications

Passive dense stereo vision on the Myriad2 VPU
2016 IEEE Hot Chips 28 Symposium (HCS) Pub Date: 2016-08-01 DOI: 10.1109/HOTCHIPS.2016.7936240
L. Puglia, M. Ionica, G. Raiconi, D. Moloney
Abstract: This article consists only of a collection of slides from the author's conference presentation.
Citations: 4
VR and AR anytime and everywhere: Contributions of PMD depth sensing to an evolving ecosystem
2016 IEEE Hot Chips 28 Symposium (HCS) Pub Date: 2016-08-01 DOI: 10.1109/HOTCHIPS.2016.7936195
B. Buxbaum
Citations: 0
3D reconstruction with tango
2016 IEEE Hot Chips 28 Symposium (HCS) Pub Date: 2016-08-01 DOI: 10.1109/HOTCHIPS.2016.7936198
Ivan Dryanovski
Citations: 1
KiloCore: A 32 nm 1000-processor array
2016 IEEE Hot Chips 28 Symposium (HCS) Pub Date: 2016-08-01 DOI: 10.1109/HOTCHIPS.2016.7936218
Brent Bohnenstiehl, Aaron Stillmaker, J. Pimentel, Timothy Andreas, Bin Liu, A. Tran, E. Adeagbo, B. Baas
Citations: 11
A 16nm 256-bit wide 89.6GByte/s total bandwidth in-package interconnect with 0.3V swing and 0.062pJ/bit power in InFO package
2016 IEEE Hot Chips 28 Symposium (HCS) Pub Date: 2016-08-01 DOI: 10.1109/HOTCHIPS.2016.7936211
M. Lin, Chien-Chun Tsai, K. Hsieh, Wen-Hung Huang, Yu-Chi Chen, Shu-Chun Yang, Chin-Ming Fu, H. Zhan, Jinn-Yeh Chien, Shao-Yu Li, Y.-H. Chen, C. Kuo, S. Tai, Kazuyoshi Yamada
Abstract:
● An in-package interconnect for in-package memory applications in an InFO package has been demonstrated
  ■ Technology: TSMC 16FF + InFO
● 89.6 GByte/s total bandwidth is achieved with 256 DQs operating at 2.8 Gbit/s and a 0.3 V swing
  ■ Low power: IO (0.062 pJ/bit); PHY (0.424 pJ/bit)
  ■ Low latency: Write (4.75T + 1.5T = 6.25T); Read (2T + 1.875T = 3.875T)
● 0.3 V signal integrity on the un-probed IO has been verified
  ■ 420 ps (0.84 UI) eye width; 225 mV (75%) eye height
● Prompt and automatic timing-calibration scheme
Citations: 9
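As a quick cross-check of the headline figures quoted in the abstract above, the short sketch below recovers the 89.6 GByte/s total bandwidth from the 256 DQ lanes running at 2.8 Gbit/s, and relates the 225 mV eye height to the 0.3 V swing. Only the quoted figures come from the abstract; the variable names are ours.

```python
# Cross-check of the bandwidth and eye-height figures quoted in the abstract above.
# The inputs (256 DQ lanes, 2.8 Gbit/s per pin, 0.3 V swing, 225 mV eye height)
# come from the abstract; the rest is plain arithmetic.

dq_lanes = 256           # data (DQ) lanes in the in-package interconnect
gbit_per_lane = 2.8      # per-pin data rate, Gbit/s

total_gbit_s = dq_lanes * gbit_per_lane   # 716.8 Gbit/s aggregate
total_gbyte_s = total_gbit_s / 8          # 89.6 GByte/s, matching the title

swing_v = 0.3            # IO voltage swing
eye_height_v = 0.225     # measured eye height
eye_ratio = eye_height_v / swing_v        # 0.75, i.e. the 75% quoted

print(f"{total_gbyte_s:.1f} GByte/s total bandwidth")      # -> 89.6 GByte/s
print(f"eye height = {eye_ratio:.0%} of the 0.3 V swing")  # -> 75%
```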
Deep compression and EIE: Efficient inference engine on compressed deep neural network
2016 IEEE Hot Chips 28 Symposium (HCS) Pub Date: 2016-08-01 DOI: 10.1109/HOTCHIPS.2016.7936226
Song Han, Xingyu Liu, Huizi Mao, Jing Pu, A. Pedram, M. Horowitz, B. Dally
Abstract: This article consists only of a collection of slides from the author's conference presentation.
Citations: 43
100Gbit/s, 120km, PAM 4 based switch to switch, layer 2 silicon photonics based optical interconnects for datacenters
2016 IEEE Hot Chips 28 Symposium (HCS) Pub Date: 2016-08-01 DOI: 10.1109/HOTCHIPS.2016.7936212
R. Nagarajan, S. Bhoja, Tom Issenhuth
Abstract: This article consists only of a collection of slides from the author's conference presentation.
Citations: 5
A new x86 core architecture for the next generation of computing
2016 IEEE Hot Chips 28 Symposium (HCS) Pub Date: 2016-08-01 DOI: 10.1109/HOTCHIPS.2016.7936224
Mike Clark
Citations: 29
Memory technology and applications
2016 IEEE Hot Chips 28 Symposium (HCS) Pub Date: 2016-08-01 DOI: 10.1109/HOTCHIPS.2016.7936173
Allen Rush
Abstract:
◢ Factors driving advanced memory designs
  - Applications requiring real-time video, VR, and advanced graphics
◢ Increased CPU/GPU performance
  - Need for balancing bandwidth and capacity
  - HBM solutions
◢ Emerging applications with unique memory requirements
  - ML: training and inference
◢ Novel solutions for PE-Mem structures
◢ Big Data
  - More data upload
  - Cloud DL: massive parameter and training data sets
Citations: 1
From model to FPGA: Software-hardware co-design for efficient neural network acceleration
2016 IEEE Hot Chips 28 Symposium (HCS) Pub Date: 2016-08-01 DOI: 10.1109/HOTCHIPS.2016.7936208
Kaiyuan Guo, Lingzhi Sui, Jiantao Qiu, Song Yao, Song Han, Yu Wang, Huazhong Yang
Abstract: Artificial neural networks, which dominate artificial intelligence applications such as object recognition and speech recognition, are still evolving. To apply neural networks to wider applications, customized hardware is necessary, since CPUs and GPUs are not efficient enough. FPGAs can be an ideal platform for neural network acceleration, since they are programmable and can achieve much higher energy efficiency than general-purpose processors. However, the long development period and insufficient performance of traditional FPGA acceleration solutions have prevented their wide adoption. In this work, we propose a complete design flow to achieve both fast deployment and high energy efficiency for accelerating neural networks on FPGAs. Deep compression and data quantization are employed to exploit the redundancy in the algorithm and reduce both computational and memory complexity. Two architecture designs, for CNNs and for DNNs/RNNs, are introduced together with the compilation environment. Evaluated on Xilinx Zynq 7000 and Kintex UltraScale series FPGAs with real-world neural networks, up to 10 times higher energy efficiency can be achieved compared with mobile and desktop GPUs.
Citations: 42
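The abstract above names deep compression and data quantization as the techniques used to cut computational and memory cost before mapping a network onto the FPGA. The entry does not spell out the quantization scheme, so the following is only a minimal sketch of the general idea, symmetric linear quantization of a floating-point weight tensor to low-bit integers; the function names, the 8-bit default, and the per-tensor scaling rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def quantize_linear(weights: np.ndarray, num_bits: int = 8):
    """Symmetric linear quantization of a float tensor to signed integers.

    Illustrative sketch of the general data-quantization idea only,
    not the scheme used in the paper.
    """
    qmax = 2 ** (num_bits - 1) - 1                # e.g. 127 for 8-bit
    scale = np.max(np.abs(weights)) / qmax        # one scale per tensor
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale                               # int8 storage assumes num_bits <= 8

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map the integer codes back to floats to estimate the quantization error."""
    return q.astype(np.float32) * scale

# Example: quantize a random weight matrix and measure the reconstruction error.
w = np.random.randn(64, 64).astype(np.float32) * 0.1
q, s = quantize_linear(w, num_bits=8)
err = np.abs(w - dequantize(q, s)).mean()
print(f"mean absolute quantization error: {err:.6f}")
```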