{"title":"深度网络的静态和动态解剖","authors":"Titouan Lorieul, Antoine Ghorra, B. Mérialdo","doi":"10.1109/CBMI.2016.7500267","DOIUrl":null,"url":null,"abstract":"Although deep learning has been a major break-through in the recent years, Deep Neural Networks (DNNs) are still the subject of intense research, and many issues remain on how to use them efficiently. In particular, training a Deep Network remains a difficult process, which requires extensive computation, and for which very precise care has to be taken to avoid overfitting, a high risk because of the extremely large number of parameters. The purpose of our work is to perform an autopsy of pre-trained Deep Networks, with the objective of collecting information about the values of the various parameters, and their possible relations and correlations. The motivation is that some of these observations could be later used as a priori knowledge to facilitate the training of new networks, by guiding the exploration of the parameter space into more probable areas. In this paper, we first present a static analysis of the AlexNet Deep Network by computing various statistics on the existing parameter values. Then, we perform a dynamic analysis by measuring the effect of certain modifications of those values on the performance of the network. For example, we show that quantizing the values of the parameters to a small adequate set of values leads to similar performance as the original network. These results suggest that pursuing such studies could lead to the design of improved training procedures for Deep Networks.","PeriodicalId":356608,"journal":{"name":"2016 14th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Static and dynamic autopsy of deep networks\",\"authors\":\"Titouan Lorieul, Antoine Ghorra, B. Mérialdo\",\"doi\":\"10.1109/CBMI.2016.7500267\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Although deep learning has been a major break-through in the recent years, Deep Neural Networks (DNNs) are still the subject of intense research, and many issues remain on how to use them efficiently. In particular, training a Deep Network remains a difficult process, which requires extensive computation, and for which very precise care has to be taken to avoid overfitting, a high risk because of the extremely large number of parameters. The purpose of our work is to perform an autopsy of pre-trained Deep Networks, with the objective of collecting information about the values of the various parameters, and their possible relations and correlations. The motivation is that some of these observations could be later used as a priori knowledge to facilitate the training of new networks, by guiding the exploration of the parameter space into more probable areas. In this paper, we first present a static analysis of the AlexNet Deep Network by computing various statistics on the existing parameter values. Then, we perform a dynamic analysis by measuring the effect of certain modifications of those values on the performance of the network. For example, we show that quantizing the values of the parameters to a small adequate set of values leads to similar performance as the original network. 
These results suggest that pursuing such studies could lead to the design of improved training procedures for Deep Networks.\",\"PeriodicalId\":356608,\"journal\":{\"name\":\"2016 14th International Workshop on Content-Based Multimedia Indexing (CBMI)\",\"volume\":\"9 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-06-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 14th International Workshop on Content-Based Multimedia Indexing (CBMI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CBMI.2016.7500267\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 14th International Workshop on Content-Based Multimedia Indexing (CBMI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CBMI.2016.7500267","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Although deep learning has been a major breakthrough in recent years, Deep Neural Networks (DNNs) are still the subject of intense research, and many questions remain about how to use them efficiently. In particular, training a Deep Network remains a difficult process that requires extensive computation, and great care must be taken to avoid overfitting, a high risk given the extremely large number of parameters. The purpose of our work is to perform an autopsy of pre-trained Deep Networks, with the objective of collecting information about the values of the various parameters and their possible relations and correlations. The motivation is that some of these observations could later be used as a priori knowledge to facilitate the training of new networks, by guiding the exploration of the parameter space toward more probable regions. In this paper, we first present a static analysis of the AlexNet Deep Network by computing various statistics on its existing parameter values. Then, we perform a dynamic analysis by measuring the effect of certain modifications of those values on the performance of the network. For example, we show that quantizing the parameter values to a small but adequate set of values yields performance similar to that of the original network. These results suggest that pursuing such studies could lead to the design of improved training procedures for Deep Networks.
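To make the static analysis concrete, the following is a minimal sketch, not the authors' code, of the kind of parameter statistics the abstract describes: per-tensor summary statistics over a pretrained AlexNet. It assumes torchvision >= 0.13, which ships ImageNet weights via the AlexNet_Weights enum.

```python
# Hypothetical sketch: collect simple statistics over the parameters
# of a pretrained AlexNet (mean, std, min, max per parameter tensor).
from torchvision.models import alexnet, AlexNet_Weights

model = alexnet(weights=AlexNet_Weights.IMAGENET1K_V1)  # pretrained on ImageNet

for name, param in model.named_parameters():
    w = param.detach().flatten()
    print(f"{name}: n={w.numel()} "
          f"mean={w.mean().item():.4f} std={w.std().item():.4f} "
          f"min={w.min().item():.4f} max={w.max().item():.4f}")
```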
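In the same spirit, here is a hedged sketch of one "dynamic" experiment of the kind mentioned above: rounding every parameter to a small set of values and reusing the network. The uniform-level quantizer below is an illustrative assumption, not necessarily the scheme evaluated in the paper.

```python
# Hypothetical sketch: quantize each parameter tensor in place to k
# evenly spaced levels spanning that tensor's own [min, max] range.
import torch
from torchvision.models import alexnet, AlexNet_Weights

@torch.no_grad()
def quantize_uniform(model: torch.nn.Module, k: int = 8) -> None:
    for param in model.parameters():
        lo, hi = param.min(), param.max()
        step = (hi - lo) / (k - 1)
        # Snap every weight to the nearest of the k levels.
        param.copy_(torch.round((param - lo) / step) * step + lo)

model = alexnet(weights=AlexNet_Weights.IMAGENET1K_V1)
quantize_uniform(model, k=8)
# The quantized model can then be re-evaluated on the original task to
# measure the performance drop relative to the unmodified network.
```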