{"title":"图像识别的正则化学习","authors":"Xinjie Lan, K. Barner","doi":"10.1109/ICMLA.2019.00010","DOIUrl":null,"url":null,"abstract":"In order to reduce overfitting for the image recognition application, this paper proposes a novel regularization learning algorithm for deep learning. Above all, we propose a novel probabilistic representation for explaining the architecture of Deep Neural Networks (DNNs), which demonstrates that the hidden layers close to the input formulate prior distributions, thus DNNs have an explicit regularization, namely the prior distributions. Furthermore, we show that the backpropagation learning algorithm is the reason for overfitting because it cannot guarantee precisely learning the prior distribution. Based on the proposed theoretical explanation for deep learning, we propose a novel regularization learning algorithm for DNNs. In contrast to most existing regularization methods reducing overfitting by decreasing the training complexity of DNNs, the proposed method reduces overfitting through training the corresponding prior distribution in a more efficient way, thereby deriving a more powerful regularization. Simulations demonstrate the proposed probabilistic representation on a synthetic dataset and validate the proposed regularization on the CIFAR-10 dataset.","PeriodicalId":436714,"journal":{"name":"2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Regularization Learning for Image Recognition\",\"authors\":\"Xinjie Lan, K. Barner\",\"doi\":\"10.1109/ICMLA.2019.00010\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In order to reduce overfitting for the image recognition application, this paper proposes a novel regularization learning algorithm for deep learning. Above all, we propose a novel probabilistic representation for explaining the architecture of Deep Neural Networks (DNNs), which demonstrates that the hidden layers close to the input formulate prior distributions, thus DNNs have an explicit regularization, namely the prior distributions. Furthermore, we show that the backpropagation learning algorithm is the reason for overfitting because it cannot guarantee precisely learning the prior distribution. Based on the proposed theoretical explanation for deep learning, we propose a novel regularization learning algorithm for DNNs. In contrast to most existing regularization methods reducing overfitting by decreasing the training complexity of DNNs, the proposed method reduces overfitting through training the corresponding prior distribution in a more efficient way, thereby deriving a more powerful regularization. 
Simulations demonstrate the proposed probabilistic representation on a synthetic dataset and validate the proposed regularization on the CIFAR-10 dataset.\",\"PeriodicalId\":436714,\"journal\":{\"name\":\"2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)\",\"volume\":\"25 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICMLA.2019.00010\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMLA.2019.00010","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
To reduce overfitting in image recognition, this paper proposes a novel regularization learning algorithm for deep learning. First, we propose a novel probabilistic representation for explaining the architecture of Deep Neural Networks (DNNs), which shows that the hidden layers close to the input formulate prior distributions; DNNs therefore contain an explicit regularizer, namely these prior distributions. Furthermore, we show that the backpropagation learning algorithm is a cause of overfitting, because it cannot guarantee that the prior distribution is learned precisely. Based on this theoretical explanation of deep learning, we propose a novel regularization learning algorithm for DNNs. In contrast to most existing regularization methods, which reduce overfitting by decreasing the training complexity of DNNs, the proposed method reduces overfitting by training the corresponding prior distribution more efficiently, thereby deriving a more powerful regularization. Simulations demonstrate the proposed probabilistic representation on a synthetic dataset and validate the proposed regularization on the CIFAR-10 dataset.
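
The abstract does not spell out the algorithm, so the following is only a minimal PyTorch sketch of the general idea it describes: the hidden layers close to the input are viewed as modeling a prior distribution over the data, and they receive a direct training signal of their own rather than relying solely on gradients backpropagated from the classification loss. Everything here is an assumption for illustration: the names (PriorBlock, prior_loss, training_step), the split between "prior" layers and task head, and the use of a reconstruction objective as a stand-in for "training the prior distribution" are all hypothetical, not the authors' method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PriorBlock(nn.Module):
    """Hypothetical 'prior' layers close to the input (not the paper's model)."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        # Stand-in objective for "learning the prior distribution":
        # reconstruct the input from the early features.
        self.decode = nn.Conv2d(32, 3, 3, padding=1)

    def forward(self, x):
        z = self.encode(x)
        return z, self.decode(z)

class Classifier(nn.Module):
    def __init__(self, num_classes=10):  # 10 classes, as in CIFAR-10
        super().__init__()
        self.prior = PriorBlock()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, num_classes),
        )

    def forward(self, x):
        z, x_hat = self.prior(x)
        return self.head(z), x_hat

model = Classifier()
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

def training_step(x, y, lam=0.1):
    """One step: task loss plus a direct signal for the prior layers."""
    logits, x_hat = model(x)
    task_loss = F.cross_entropy(logits, y)
    # Extra term that trains the early layers directly, instead of leaving
    # them entirely to gradients backpropagated through the whole head.
    prior_loss = F.mse_loss(x_hat, x)
    loss = task_loss + lam * prior_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage on a CIFAR-10-shaped batch:
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
training_step(x, y)
```

A reconstruction term is only one plausible way to give the early layers a direct training signal; whatever estimator the paper actually uses to learn the prior distribution would take the place of prior_loss here.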