{"title":"学习表征规范化:关注多个输入模块","authors":"M. L. Rossen","doi":"10.1109/NNSP.1991.239530","DOIUrl":null,"url":null,"abstract":"A large, multi-modular neural network can be envisaged for use in a complex, multi-task application. The optimum data representation for each sub-task of such an application is often unknown and different from the optimum data representation for the other sub-tasks. A method is needed that allows a network that contains several alternate input representations to learn to focus its attention on the best representation(s) for each sub-task to be learned, without a priori information on best representation-sub-task combinations. An adaptive attention focusing method is introduced that addresses this issue. The method involves training recurrent connections for each input module to selectively attenuate input to that module that causes training error in a final target module. The method is shown to have similarities with both gating networks and anti-Hebbian learning. A task scenario is proposed for which adaptive attention focusing provides superior classification performance relative to standard training methods.<<ETX>>","PeriodicalId":354832,"journal":{"name":"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop","volume":"70 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1991-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Learned representation normalization: attention focusing with multiple input modules\",\"authors\":\"M. L. Rossen\",\"doi\":\"10.1109/NNSP.1991.239530\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A large, multi-modular neural network can be envisaged for use in a complex, multi-task application. The optimum data representation for each sub-task of such an application is often unknown and different from the optimum data representation for the other sub-tasks. A method is needed that allows a network that contains several alternate input representations to learn to focus its attention on the best representation(s) for each sub-task to be learned, without a priori information on best representation-sub-task combinations. An adaptive attention focusing method is introduced that addresses this issue. The method involves training recurrent connections for each input module to selectively attenuate input to that module that causes training error in a final target module. The method is shown to have similarities with both gating networks and anti-Hebbian learning. 
A task scenario is proposed for which adaptive attention focusing provides superior classification performance relative to standard training methods.<<ETX>>\",\"PeriodicalId\":354832,\"journal\":{\"name\":\"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop\",\"volume\":\"70 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1991-09-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NNSP.1991.239530\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NNSP.1991.239530","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A large, multi-modular neural network can be envisaged for use in a complex, multi-task application. The optimum data representation for each sub-task of such an application is often unknown and differs from the optimum data representation for the other sub-tasks. A method is therefore needed that allows a network containing several alternate input representations to learn to focus its attention on the best representation(s) for each sub-task, without a priori information about which representation suits which sub-task. An adaptive attention-focusing method is introduced to address this issue. The method trains recurrent connections for each input module to selectively attenuate input to that module that causes training error in a final target module. The method is shown to have similarities with both gating networks and anti-Hebbian learning. A task scenario is proposed for which adaptive attention focusing provides superior classification performance relative to standard training methods.
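The abstract does not give the full formulation, but the core idea, letting error at a final target module drive attenuation of the input modules that cause it, can be illustrated with a simple sketch. The Python (PyTorch) example below is an assumption-laden illustration, not the paper's method: it replaces the paper's trained recurrent connections with a per-module learned gate, trained end-to-end so that gradients from the target module's classification loss shape the attenuation. All names here (GatedInputModule, MultiModuleClassifier, gate_logits) are hypothetical.

```python
import torch
import torch.nn as nn


class GatedInputModule(nn.Module):
    """One input module whose learned gate can attenuate its own representation."""

    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden_dim)
        # Per-unit gate; a sigmoid keeps the attenuation factor in [0, 1].
        self.gate_logits = nn.Parameter(torch.zeros(hidden_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.tanh(self.encoder(x))
        return torch.sigmoid(self.gate_logits) * h


class MultiModuleClassifier(nn.Module):
    """Several gated input modules feeding one final target module."""

    def __init__(self, in_dims, hidden_dim: int, n_classes: int):
        super().__init__()
        self.input_modules = nn.ModuleList(
            GatedInputModule(d, hidden_dim) for d in in_dims
        )
        self.target = nn.Linear(hidden_dim * len(in_dims), n_classes)

    def forward(self, inputs):
        # `inputs` is a list of tensors, one per alternate representation.
        h = torch.cat([m(x) for m, x in zip(self.input_modules, inputs)], dim=-1)
        return self.target(h)


# Toy scenario: two alternate representations of the same pattern; only the
# first carries class information, the second is pure noise.
torch.manual_seed(0)
x_good = torch.randn(256, 8)
y = (x_good.sum(dim=1) > 0).long()
x_noise = torch.randn(256, 8)

model = MultiModuleClassifier(in_dims=[8, 8], hidden_dim=16, n_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model([x_good, x_noise]), y)
    loss.backward()
    optimizer.step()

# Inspect the learned gates; in this toy setup the noisy module's gate often
# ends up noticeably smaller than the informative module's gate.
for i, m in enumerate(model.input_modules):
    print(f"module {i} mean gate:", torch.sigmoid(m.gate_logits).mean().item())
```

Because the gates receive gradient only through the target module's loss, an uninformative representation offers the classifier nothing to exploit and its gate is free to shrink; the paper's recurrent, anti-Hebbian-flavored formulation pursues the same end of selectively attenuating input that induces error at the target.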