Guidelines for the Regularization of Gammas in Batch Normalization for Deep Residual Networks

Bum Jun Kim, Hyeyeon Choi, Hyeonah Jang, Sang Woo Kim

ACM Transactions on Intelligent Systems and Technology · DOI: 10.1145/3643860 · Published 2024-02-01
L2 regularization for weights in neural networks is widely used as a standard training trick. In addition to weights, the use of batch normalization involves an additional trainable parameter γ, which acts as a scaling factor. However, L2 regularization for γ remains largely undiscussed and is applied inconsistently across libraries and practitioners. In this paper, we study whether L2 regularization for γ is valid. To explore this issue, we consider two approaches: 1) variance control to make the residual network behave like an identity mapping and 2) stable optimization through improvement of the effective learning rate. Through these two analyses, we identify the γ for which L2 regularization is desirable and undesirable, and propose four guidelines for managing them. In several experiments, we observed that applying L2 regularization to applicable γ increased classification accuracy by 1%–4%, whereas applying it to inapplicable γ decreased classification accuracy by 1%–3%, consistent with our four guidelines. Our proposed guidelines were further validated across various tasks and architectures, including variants of residual networks and transformers.
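Since the guidelines concern which γ to regularize rather than how, the practical mechanism is selective weight decay via optimizer parameter groups. Below is a minimal PyTorch sketch of that mechanism. The selection rule used here (decay only the γ of the BN layer that closes each residual branch, "bn2" in torchvision's BasicBlock) is an illustrative assumption standing in for the paper's four guidelines, which the abstract does not spell out; it is motivated by the paper's variance-control idea that shrinking the branch-closing γ pushes the block toward an identity mapping.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=10)

# Names of every BatchNorm gamma (PyTorch stores gamma as BatchNorm2d.weight).
bn_gamma_names = {
    f"{mod_name}.weight"
    for mod_name, mod in model.named_modules()
    if isinstance(mod, nn.BatchNorm2d)
}

decay_gammas, plain_gammas, others = [], [], []
for name, param in model.named_parameters():
    if name in bn_gamma_names:
        # Hypothetical rule for illustration only: "bn2" closes the residual
        # branch in torchvision's BasicBlock; decaying its gamma pulls the
        # branch output toward zero, i.e., the block toward an identity mapping.
        (decay_gammas if name.endswith("bn2.weight") else plain_gammas).append(param)
    else:
        # Conv/linear weights, biases, and BN betas. In practice, biases and
        # betas are often also placed in a no-decay group.
        others.append(param)

optimizer = torch.optim.SGD(
    [
        {"params": others, "weight_decay": 1e-4},        # standard L2 on weights
        {"params": decay_gammas, "weight_decay": 1e-4},  # L2 on "applicable" gammas
        {"params": plain_gammas, "weight_decay": 0.0},   # no L2 on remaining gammas
    ],
    lr=0.1,
    momentum=0.9,
)
```

Parameter groups make the split explicit and auditable, which matters here: deep learning libraries differ in their defaults (some decay all BN γ along with the weights, others exempt all normalization parameters), and that inconsistency is exactly what the paper examines.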
Journal Introduction:
ACM Transactions on Intelligent Systems and Technology is a scholarly journal that publishes the highest quality papers on intelligent systems, applicable algorithms and technology with a multi-disciplinary perspective. An intelligent system is one that uses artificial intelligence (AI) techniques to offer important services (e.g., as a component of a larger system) to allow integrated systems to perceive, reason, learn, and act intelligently in the real world.
ACM TIST is published bimonthly (six issues a year). Each issue has 8-11 regular papers, with around 20 published journal pages or 10,000 words per paper. Additional references, proofs, graphs, or detailed experimental results can be submitted as a separate appendix, while excessively lengthy papers will be rejected automatically. Authors can include online-only appendices for additional content of their published papers and are encouraged to share their code and/or data with other readers.