{"title":"利用深度卷积神经网络逼近多特征函数","authors":"Tong Mao, Zhongjie Shi, Ding-Xuan Zhou","doi":"10.1142/s0219530522400085","DOIUrl":null,"url":null,"abstract":"Deep convolutional neural networks (DCNNs) have achieved great empirical success in many fields such as natural language processing, computer vision, and pattern recognition. But there still lacks theoretical understanding of the flexibility and adaptivity of DCNNs in various learning tasks, and the power of DCNNs at feature extraction. We propose a generic DCNN structure consisting of two groups of convolutional layers associated with two downsampling operators, and a fully connected layer, which is determined only by three structural parameters. Our generic DCNNs are capable of extracting various features including not only polynomial features but also general smooth features. We also show that the curse of dimensionality can be circumvented by our DCNNs for target functions of the compositional form with (symmetric) polynomial features, spatially sparse smooth features, and interaction features. These demonstrate the expressive power of our DCNN structure, while the model selection can be relaxed comparing with other deep neural networks since there are only three hyperparameters controlling the architecture to tune.","PeriodicalId":2,"journal":{"name":"ACS Applied Bio Materials","volume":null,"pages":null},"PeriodicalIF":4.6000,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Approximating functions with multi-features by deep convolutional neural networks\",\"authors\":\"Tong Mao, Zhongjie Shi, Ding-Xuan Zhou\",\"doi\":\"10.1142/s0219530522400085\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep convolutional neural networks (DCNNs) have achieved great empirical success in many fields such as natural language processing, computer vision, and pattern recognition. But there still lacks theoretical understanding of the flexibility and adaptivity of DCNNs in various learning tasks, and the power of DCNNs at feature extraction. We propose a generic DCNN structure consisting of two groups of convolutional layers associated with two downsampling operators, and a fully connected layer, which is determined only by three structural parameters. Our generic DCNNs are capable of extracting various features including not only polynomial features but also general smooth features. We also show that the curse of dimensionality can be circumvented by our DCNNs for target functions of the compositional form with (symmetric) polynomial features, spatially sparse smooth features, and interaction features. 
These demonstrate the expressive power of our DCNN structure, while the model selection can be relaxed comparing with other deep neural networks since there are only three hyperparameters controlling the architecture to tune.\",\"PeriodicalId\":2,\"journal\":{\"name\":\"ACS Applied Bio Materials\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2022-11-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACS Applied Bio Materials\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1142/s0219530522400085\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATERIALS SCIENCE, BIOMATERIALS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Bio Materials","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1142/s0219530522400085","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATERIALS SCIENCE, BIOMATERIALS","Score":null,"Total":0}
Deep convolutional neural networks (DCNNs) have achieved great empirical success in many fields such as natural language processing, computer vision, and pattern recognition. However, a theoretical understanding of the flexibility and adaptivity of DCNNs across learning tasks, and of their power in feature extraction, is still lacking. We propose a generic DCNN structure consisting of two groups of convolutional layers associated with two downsampling operators, followed by a fully connected layer; the architecture is determined by only three structural parameters. Our generic DCNNs can extract a variety of features, including not only polynomial features but also general smooth features. We also show that our DCNNs circumvent the curse of dimensionality for target functions of compositional form with (symmetric) polynomial features, spatially sparse smooth features, and interaction features. These results demonstrate the expressive power of our DCNN structure, while model selection is easier than for other deep neural networks, since only three hyperparameters control the architecture.
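
For readers who want a concrete picture of the structure the abstract describes, below is a minimal PyTorch sketch: two groups of 1-D convolutional layers, each group followed by a downsampling operator, and a final fully connected layer. The choice of max-pooling as the downsampling operator, the ReLU activations, and the parameter names (J1 and J2 for the group depths, s for the filter length) are illustrative assumptions for this sketch, not the authors' exact construction.

```python
# A minimal sketch of the generic DCNN structure from the abstract:
# two groups of 1-D convolutional layers, each followed by a
# downsampling operator, and a final fully connected layer.
# The layer details (zero-padded 1-D convolutions, ReLU, max-pooling)
# and the names J1, J2, s are illustrative assumptions.
import torch
import torch.nn as nn


class GenericDCNN(nn.Module):
    def __init__(self, J1: int, J2: int, s: int, out_dim: int = 1):
        super().__init__()
        # Group 1: J1 convolutional layers with filter length s
        # (zero-padded, so the signal length grows by s - 1 per layer),
        # followed by the first downsampling operator.
        self.group1 = nn.Sequential(
            *[nn.Sequential(nn.Conv1d(1, 1, kernel_size=s, padding=s - 1), nn.ReLU())
              for _ in range(J1)],
            nn.MaxPool1d(kernel_size=2),  # first downsampling operator
        )
        # Group 2: J2 convolutional layers, followed by the second
        # downsampling operator.
        self.group2 = nn.Sequential(
            *[nn.Sequential(nn.Conv1d(1, 1, kernel_size=s, padding=s - 1), nn.ReLU())
              for _ in range(J2)],
            nn.MaxPool1d(kernel_size=2),  # second downsampling operator
        )
        # Final fully connected layer; LazyLinear infers its input width
        # from the first forward pass, so the sketch works for any input size.
        self.fc = nn.LazyLinear(out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, input_dim) -> add a channel axis for Conv1d.
        x = x.unsqueeze(1)
        x = self.group1(x)
        x = self.group2(x)
        return self.fc(x.flatten(start_dim=1))


if __name__ == "__main__":
    net = GenericDCNN(J1=2, J2=2, s=3)
    y = net(torch.randn(8, 16))
    print(y.shape)  # torch.Size([8, 1])
```

With this kind of parameterization, tuning the architecture amounts to choosing only (J1, J2, s), which illustrates the sense in which model selection here is lighter than for generic deep networks whose depth, widths, and connectivity must all be tuned.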