On the Smoothing of Deep Networks
Vincent Roulet, Zaïd Harchaoui
2021 55th Annual Conference on Information Sciences and Systems (CISS), published 2021-03-24
DOI: 10.1109/CISS50987.2021.9400285

Abstract: Many popular deep neural networks implement an input-output mapping that is non-smooth with respect to the network parameters. This non-smoothness may have contributed to the difficulty of analyzing deep learning theoretically. Sophisticated approaches have recently been proposed to address this specific difficulty. In this note, we explore a simple approach that instead consists of smoothing the input-output mapping. We show how to perform smoothing automatically within a differentiable programming framework. The impact of the smoothing on the convergence behavior can then be automatically controlled. We illustrate our approach with numerical examples using multilayer perceptrons.
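The abstract does not specify the smoothing mechanism, but a standard way to smooth a non-smooth parameter-to-loss mapping is randomized (Gaussian) smoothing: replace the loss f(w) by its Gaussian convolution E_z[f(w + σz)], estimated by Monte Carlo. The sketch below is illustrative only and is not the authors' method; the ReLU MLP, `smoothed_loss` helper, and the choice of σ are all assumptions for the example.

```python
import numpy as np

def relu_mlp(params, x):
    """Tiny multilayer perceptron with ReLU activations: non-smooth in params."""
    h = x
    for W, b in params:
        h = np.maximum(h @ W + b, 0.0)
    return h

def smoothed_loss(loss_fn, params, sigma, n_samples=100, seed=None):
    """Monte Carlo estimate of the Gaussian-smoothed loss
    E_z[ loss_fn(params + sigma * z) ] with z ~ N(0, I).
    (Illustrative smoothing; not the specific scheme of the paper.)"""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        perturbed = [(W + sigma * rng.standard_normal(W.shape),
                      b + sigma * rng.standard_normal(b.shape))
                     for W, b in params]
        total += loss_fn(perturbed)
    return total / n_samples

# Example: smooth the squared loss of a 2-layer MLP on a single point.
rng = np.random.default_rng(0)
params = [(rng.standard_normal((3, 4)), np.zeros(4)),
          (rng.standard_normal((4, 1)), np.zeros(1))]
x, y = rng.standard_normal(3), np.array([1.0])
loss = lambda p: float(np.sum((relu_mlp(p, x) - y) ** 2))

# As sigma -> 0 the smoothed loss recovers the original non-smooth loss,
# so sigma trades off smoothness against approximation error.
gap = abs(smoothed_loss(loss, params, sigma=1e-4, seed=1) - loss(params))
print(gap)
```

In a differentiable programming framework the same construction applies to gradients: differentiating the Monte Carlo average yields a (biased-variance-traded) gradient of the smoothed objective, which is what allows the effect of σ on convergence to be tracked automatically.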