Enabling future progress in machine-learning

O. Temam
{"title":"推动机器学习的未来发展","authors":"O. Temam","doi":"10.1109/VLSIC.2016.7573457","DOIUrl":null,"url":null,"abstract":"Amazing progress in machine-learning, largely based on deep neural networks, has started to make applications once considered impossible, such as real-time translation or self-driving cars, a reality. However, even if, on some restricted problems, machine-learning is getting close to human-level performance, we are still far from the capabilities of the human brain. Machine-learning researchers themselves acknowledge that the progress observed in the past 10 years has been largely due to rapid increase in computing performance, allowing to tackle larger neural networks and larger training sets. So the computer systems and circuits communities can play a very significant role in enabling future progress. While GPUs have been a major driver of this recent progress, both the slowing rate of improvement of standard CMOS technology and the need for even faster progress suggest to at least explore alternative approaches. In this talk, we will discuss lessons learned from research on architectures for machine-learning, and that some of the hurdles ahead largely lie at the circuit level, but can possibly be overcome in the near future.","PeriodicalId":6512,"journal":{"name":"2016 IEEE Symposium on VLSI Circuits (VLSI-Circuits)","volume":"28 1","pages":"1-3"},"PeriodicalIF":0.0000,"publicationDate":"2016-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Enabling future progress in machine-learning\",\"authors\":\"O. Temam\",\"doi\":\"10.1109/VLSIC.2016.7573457\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Amazing progress in machine-learning, largely based on deep neural networks, has started to make applications once considered impossible, such as real-time translation or self-driving cars, a reality. However, even if, on some restricted problems, machine-learning is getting close to human-level performance, we are still far from the capabilities of the human brain. Machine-learning researchers themselves acknowledge that the progress observed in the past 10 years has been largely due to rapid increase in computing performance, allowing to tackle larger neural networks and larger training sets. So the computer systems and circuits communities can play a very significant role in enabling future progress. While GPUs have been a major driver of this recent progress, both the slowing rate of improvement of standard CMOS technology and the need for even faster progress suggest to at least explore alternative approaches. 
In this talk, we will discuss lessons learned from research on architectures for machine-learning, and that some of the hurdles ahead largely lie at the circuit level, but can possibly be overcome in the near future.\",\"PeriodicalId\":6512,\"journal\":{\"name\":\"2016 IEEE Symposium on VLSI Circuits (VLSI-Circuits)\",\"volume\":\"28 1\",\"pages\":\"1-3\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-06-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 IEEE Symposium on VLSI Circuits (VLSI-Circuits)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/VLSIC.2016.7573457\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE Symposium on VLSI Circuits (VLSI-Circuits)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VLSIC.2016.7573457","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5

Abstract

Remarkable progress in machine learning, largely driven by deep neural networks, has begun to turn applications once considered impossible, such as real-time translation and self-driving cars, into reality. However, even though machine learning is approaching human-level performance on some restricted problems, we are still far from the capabilities of the human brain. Machine-learning researchers themselves acknowledge that the progress of the past 10 years has been largely due to rapid increases in computing performance, which have made it possible to tackle larger neural networks and larger training sets. The computer systems and circuits communities can therefore play a very significant role in enabling future progress. While GPUs have been a major driver of this recent progress, both the slowing improvement of standard CMOS technology and the need for even faster progress suggest that alternative approaches should at least be explored. In this talk, we discuss lessons learned from research on architectures for machine learning, and argue that some of the hurdles ahead lie largely at the circuit level but can likely be overcome in the near future.
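The abstract's claim that recent progress has tracked raw computing performance can be made concrete with a rough scaling estimate. The sketch below is illustrative only and not part of the talk; it assumes a dense network in which each weight costs roughly 6 FLOPs per training example (forward plus backward pass), and plugs in AlexNet-scale figures purely as an example.

```python
# Back-of-envelope estimate of training compute for a dense neural network.
# Illustrative assumptions (not from the talk): a forward pass costs roughly
# 2 FLOPs per weight per example, and the backward pass about twice that,
# giving ~6 FLOPs per weight per example; parameter sharing in convolutions
# is ignored.

def training_flops(num_weights: float, num_examples: float, num_epochs: int = 1) -> float:
    """Rough total FLOPs needed to train a dense network."""
    flops_per_example = 6 * num_weights  # forward (~2x) + backward (~4x)
    return flops_per_example * num_examples * num_epochs

if __name__ == "__main__":
    # Hypothetical example: a 60M-parameter network trained on 1.2M images
    # for 90 epochs (AlexNet-scale numbers, used only for illustration).
    total = training_flops(num_weights=60e6, num_examples=1.2e6, num_epochs=90)
    print(f"~{total:.2e} FLOPs")  # on the order of 10^16 FLOPs
```

Under these assumptions, doubling either the network size or the training set doubles the required compute, which is why larger models and datasets only became practical as hardware performance grew.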